00:00:00.000 Started by upstream project "autotest-per-patch" build number 132400
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.118 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.118 The recommended git tool is: git
00:00:00.118 using credential 00000000-0000-0000-0000-000000000002
00:00:00.120 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.172 Fetching changes from the remote Git repository
00:00:00.174 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.226 Using shallow fetch with depth 1
00:00:00.227 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.227 > git --version # timeout=10
00:00:00.273 > git --version # 'git version 2.39.2'
00:00:00.273 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.305 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.305 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.307 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.321 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.337 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:06.338 > git config core.sparsecheckout # timeout=10
00:00:06.349 > git read-tree -mu HEAD # timeout=10
00:00:06.368 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:06.391 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:06.392 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:06.502 [Pipeline] Start of Pipeline
00:00:06.517 [Pipeline] library
00:00:06.518 Loading library shm_lib@master
00:00:06.518 Library shm_lib@master is cached. Copying from home.
00:00:06.529 [Pipeline] node
00:00:06.535 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.536 [Pipeline] {
00:00:06.546 [Pipeline] catchError
00:00:06.548 [Pipeline] {
00:00:06.559 [Pipeline] wrap
00:00:06.568 [Pipeline] {
00:00:06.576 [Pipeline] stage
00:00:06.577 [Pipeline] { (Prologue)
00:00:06.785 [Pipeline] sh
00:00:07.069 + logger -p user.info -t JENKINS-CI
00:00:07.088 [Pipeline] echo
00:00:07.089 Node: CYP9
00:00:07.097 [Pipeline] sh
00:00:07.494 [Pipeline] setCustomBuildProperty
00:00:07.505 [Pipeline] echo
00:00:07.507 Cleanup processes
00:00:07.513 [Pipeline] sh
00:00:07.802 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.802 272794 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.816 [Pipeline] sh
00:00:08.102 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.102 ++ grep -v 'sudo pgrep'
00:00:08.102 ++ awk '{print $1}'
00:00:08.102 + sudo kill -9
00:00:08.102 + true
00:00:08.119 [Pipeline] cleanWs
00:00:08.129 [WS-CLEANUP] Deleting project workspace...
00:00:08.129 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.136 [WS-CLEANUP] done
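The "Cleanup processes" step above probes for SPDK processes left over from a previous run and kills them before the workspace is wiped. A minimal re-creation of that idiom, assuming the same workspace path; the trailing || true mirrors the "+ true" in the trace, since kill -9 exits non-zero when the PID list is empty and set -e would otherwise abort the job:

    # list candidates, drop the pgrep command itself, keep only the PIDs
    ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest
    pids=$(sudo pgrep -af "$ws/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    # kill -9 with an empty PID list fails, hence the || true
    sudo kill -9 $pids || true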
00:00:08.140 [Pipeline] setCustomBuildProperty
00:00:08.155 [Pipeline] sh
00:00:08.440 + sudo git config --global --replace-all safe.directory '*'
00:00:08.542 [Pipeline] httpRequest
00:00:08.879 [Pipeline] echo
00:00:08.881 Sorcerer 10.211.164.20 is alive
00:00:08.890 [Pipeline] retry
00:00:08.892 [Pipeline] {
00:00:08.904 [Pipeline] httpRequest
00:00:08.908 HttpMethod: GET
00:00:08.909 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.909 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.918 Response Code: HTTP/1.1 200 OK
00:00:08.919 Success: Status code 200 is in the accepted range: 200,404
00:00:08.919 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:20.965 [Pipeline] }
00:00:20.985 [Pipeline] // retry
00:00:20.993 [Pipeline] sh
00:00:21.281 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:21.299 [Pipeline] httpRequest
00:00:21.727 [Pipeline] echo
00:00:21.728 Sorcerer 10.211.164.20 is alive
00:00:21.738 [Pipeline] retry
00:00:21.740 [Pipeline] {
00:00:21.754 [Pipeline] httpRequest
00:00:21.759 HttpMethod: GET
00:00:21.759 URL: http://10.211.164.20/packages/spdk_32c3f377ce03bec5d9f2580eb20bb1c8c0c1d06c.tar.gz
00:00:21.760 Sending request to url: http://10.211.164.20/packages/spdk_32c3f377ce03bec5d9f2580eb20bb1c8c0c1d06c.tar.gz
00:00:21.770 Response Code: HTTP/1.1 200 OK
00:00:21.771 Success: Status code 200 is in the accepted range: 200,404
00:00:21.771 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_32c3f377ce03bec5d9f2580eb20bb1c8c0c1d06c.tar.gz
00:02:01.387 [Pipeline] }
00:02:01.403 [Pipeline] // retry
00:02:01.410 [Pipeline] sh
00:02:01.698 + tar --no-same-owner -xf spdk_32c3f377ce03bec5d9f2580eb20bb1c8c0c1d06c.tar.gz
00:02:05.016 [Pipeline] sh
00:02:05.307 + git -C spdk log --oneline -n5
00:02:05.307 32c3f377c bdev: Use data_block_size for upper layer buffer if hide_metadata is true
00:02:05.307 d3dfde872 bdev: Add APIs get metadata config via desc depending on hide_metadata option
00:02:05.307 b6a8866f3 bdev: Add spdk_bdev_open_ext_v2() to support per-open options
00:02:05.307 3bdf5e874 bdev: Locate all hot data in spdk_bdev_desc to the first cache line
00:02:05.307 557f022f6 bdev: Change 1st parameter of bdev_bytes_to_blocks from bdev to desc
00:02:05.320 [Pipeline] }
00:02:05.334 [Pipeline] // stage
00:02:05.344 [Pipeline] stage
00:02:05.347 [Pipeline] { (Prepare)
00:02:05.365 [Pipeline] writeFile
00:02:05.382 [Pipeline] sh
00:02:05.671 + logger -p user.info -t JENKINS-CI
00:02:05.685 [Pipeline] sh
00:02:05.971 + logger -p user.info -t JENKINS-CI
00:02:05.985 [Pipeline] sh
00:02:06.273 + cat autorun-spdk.conf
00:02:06.274 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:06.274 SPDK_TEST_NVMF=1
00:02:06.274 SPDK_TEST_NVME_CLI=1
00:02:06.274 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:06.274 SPDK_TEST_NVMF_NICS=e810
00:02:06.274 SPDK_TEST_VFIOUSER=1
00:02:06.274 SPDK_RUN_UBSAN=1
00:02:06.274 NET_TYPE=phy
00:02:06.282 RUN_NIGHTLY=0
00:02:06.288 [Pipeline] readFile
00:02:06.313 [Pipeline] withEnv
00:02:06.316 [Pipeline] {
00:02:06.329 [Pipeline] sh
00:02:06.620 + set -ex
00:02:06.620 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:02:06.620 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:06.620 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:06.620 ++ SPDK_TEST_NVMF=1
00:02:06.620 ++ SPDK_TEST_NVME_CLI=1
00:02:06.620 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:06.620 ++ SPDK_TEST_NVMF_NICS=e810
00:02:06.620 ++ SPDK_TEST_VFIOUSER=1
00:02:06.620 ++ SPDK_RUN_UBSAN=1
00:02:06.620 ++ NET_TYPE=phy
00:02:06.620 ++ RUN_NIGHTLY=0
00:02:06.620 + case $SPDK_TEST_NVMF_NICS in
00:02:06.620 + DRIVERS=ice
00:02:06.620 + [[ tcp == \r\d\m\a ]]
00:02:06.620 + [[ -n ice ]]
00:02:06.620 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:02:06.620 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:02:06.620 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:02:06.620 rmmod: ERROR: Module irdma is not currently loaded
00:02:06.620 rmmod: ERROR: Module i40iw is not currently loaded
00:02:06.620 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:02:06.620 + true
00:02:06.620 + for D in $DRIVERS
00:02:06.620 + sudo modprobe ice
00:02:06.620 + exit 0
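The withEnv script that just exited swaps NIC drivers according to the sourced configuration: every RDMA-capable module that could claim the port is removed (the rmmod ERROR lines are expected and swallowed), then the driver matching SPDK_TEST_NVMF_NICS=e810 is loaded. A condensed sketch of that pattern:

    # unload anything that might hold the NIC; absent modules just print ERROR
    sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
    # e810 NICs are driven by ice, per the case statement in the trace above
    for D in ice; do
        sudo modprobe "$D"
    done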
00:02:06.631 [Pipeline] }
00:02:06.646 [Pipeline] // withEnv
00:02:06.651 [Pipeline] }
00:02:06.665 [Pipeline] // stage
00:02:06.674 [Pipeline] catchError
00:02:06.676 [Pipeline] {
00:02:06.689 [Pipeline] timeout
00:02:06.689 Timeout set to expire in 1 hr 0 min
00:02:06.691 [Pipeline] {
00:02:06.704 [Pipeline] stage
00:02:06.706 [Pipeline] { (Tests)
00:02:06.720 [Pipeline] sh
00:02:07.012 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:07.012 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:07.012 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:07.012 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:02:07.012 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:07.012 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:07.012 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:02:07.012 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:07.012 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:07.012 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:07.012 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:02:07.012 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:07.012 + source /etc/os-release
00:02:07.012 ++ NAME='Fedora Linux'
00:02:07.012 ++ VERSION='39 (Cloud Edition)'
00:02:07.012 ++ ID=fedora
00:02:07.012 ++ VERSION_ID=39
00:02:07.012 ++ VERSION_CODENAME=
00:02:07.012 ++ PLATFORM_ID=platform:f39
00:02:07.012 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:07.012 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:07.012 ++ LOGO=fedora-logo-icon
00:02:07.012 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:07.012 ++ HOME_URL=https://fedoraproject.org/
00:02:07.012 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:07.012 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:07.012 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:07.012 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:07.012 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:07.012 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:07.012 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:07.012 ++ SUPPORT_END=2024-11-12
00:02:07.012 ++ VARIANT='Cloud Edition'
00:02:07.012 ++ VARIANT_ID=cloud
00:02:07.012 + uname -a
00:02:07.012 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:07.012 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:10.324 Hugepages
00:02:10.324 node hugesize free / total
00:02:10.324 node0 1048576kB 0 / 0
00:02:10.324 node0 2048kB 0 / 0
00:02:10.324 node1 1048576kB 0 / 0
00:02:10.324 node1 2048kB 0 / 0
00:02:10.324
00:02:10.324 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:10.324 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:02:10.324 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:02:10.324 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:02:10.324 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:02:10.324 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:02:10.324 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:02:10.324 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:02:10.324 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:02:10.325 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:02:10.331 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:02:10.331 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:02:10.331 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:02:10.332 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:02:10.332 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:02:10.332 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:02:10.332 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:02:10.332 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
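setup.sh status printed the hugepage pools (all 0 / 0 here, so nothing reserved yet) and the I/OAT and NVMe devices per NUMA node. The same hugepage numbers can be read straight from sysfs, which is a quick way to verify them without SPDK's script:

    # per-node 2 MB hugepage pools, matching the node0/node1 rows above
    for n in /sys/devices/system/node/node[01]; do
        echo "$n: $(cat $n/hugepages/hugepages-2048kB/free_hugepages) free / $(cat $n/hugepages/hugepages-2048kB/nr_hugepages) total"
    done
    grep -i hugepages /proc/meminfo    # system-wide summary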
00:02:10.332 + rm -f /tmp/spdk-ld-path
00:02:10.332 + source autorun-spdk.conf
00:02:10.332 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:10.332 ++ SPDK_TEST_NVMF=1
00:02:10.332 ++ SPDK_TEST_NVME_CLI=1
00:02:10.332 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:10.332 ++ SPDK_TEST_NVMF_NICS=e810
00:02:10.332 ++ SPDK_TEST_VFIOUSER=1
00:02:10.332 ++ SPDK_RUN_UBSAN=1
00:02:10.332 ++ NET_TYPE=phy
00:02:10.332 ++ RUN_NIGHTLY=0
00:02:10.332 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:10.332 + [[ -n '' ]]
00:02:10.332 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:10.332 + for M in /var/spdk/build-*-manifest.txt
00:02:10.332 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:10.332 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:10.332 + for M in /var/spdk/build-*-manifest.txt
00:02:10.332 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:10.332 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:10.332 + for M in /var/spdk/build-*-manifest.txt
00:02:10.332 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:10.332 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:10.332 ++ uname
00:02:10.332 + [[ Linux == \L\i\n\u\x ]]
00:02:10.332 + sudo dmesg -T
00:02:10.332 + sudo dmesg --clear
00:02:10.332 + dmesg_pid=273768
00:02:10.332 + [[ Fedora Linux == FreeBSD ]]
00:02:10.332 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:10.332 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:10.332 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:10.332 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:02:10.332 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:02:10.333 + [[ -x /usr/src/fio-static/fio ]]
00:02:10.333 + export FIO_BIN=/usr/src/fio-static/fio
00:02:10.333 + FIO_BIN=/usr/src/fio-static/fio
00:02:10.333 + sudo dmesg -Tw
00:02:10.333 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:10.333 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:10.333 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:10.333 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:10.333 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:10.333 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:10.333 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:10.333 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:10.333 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:10.333 15:11:59 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:10.333 15:11:59 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:10.333 15:11:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:10.333 15:11:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:02:10.333 15:11:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:02:10.333 15:11:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:10.333 15:11:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:02:10.333 15:11:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:02:10.333 15:11:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:02:10.333 15:11:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:02:10.333 15:11:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:02:10.333 15:11:59 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:10.333 15:11:59 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
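From this point the run is driven entirely by autorun-spdk.conf: autorun.sh sources it (the conf@1..@9 trace lines above) and each SPDK_TEST_* flag gates a suite. A simplified illustration of the consumption pattern, using the flag names from the conf shown earlier; the echo stands in for the real suite dispatch:

    # the conf is plain shell, so sourcing it publishes the test matrix
    source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
    if [[ $SPDK_TEST_NVMF -eq 1 && $SPDK_TEST_NVMF_TRANSPORT == tcp ]]; then
        echo "NVMe-oF/TCP suite enabled on $SPDK_TEST_NVMF_NICS NICs"
    fi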
00:02:10.600 15:11:59 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:10.600 15:11:59 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:02:10.600 15:11:59 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:10.600 15:11:59 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:10.600 15:11:59 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:10.600 15:11:59 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:10.600 15:11:59 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:10.600 15:11:59 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:10.600 15:11:59 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:10.600 15:11:59 -- paths/export.sh@5 -- $ export PATH
00:02:10.600 15:11:59 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:10.600 15:11:59 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:02:10.600 15:11:59 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:10.600 15:11:59 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732111919.XXXXXX
00:02:10.600 15:11:59 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732111919.JmdGj4
00:02:10.600 15:11:59 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:10.600 15:11:59 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:02:10.600 15:11:59 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:02:10.600 15:11:59 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:02:10.600 15:11:59 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
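Note how each re-sourcing of paths/export.sh above prepends the same toolchain directories again, so the echoed PATH carries /opt/golangci, /opt/protoc and /opt/go several times over. Harmless, but an idempotent prepend avoids the growth; path_prepend here is a hypothetical helper, not part of the SPDK scripts:

    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;              # already present, leave PATH alone
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/protoc/21.7/bin
    path_prepend /opt/golangci/1.54.2/bin
    export PATH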
00:02:10.600 15:11:59 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:10.600 15:11:59 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:10.600 15:11:59 -- common/autotest_common.sh@10 -- $ set +x
00:02:10.600 15:11:59 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:02:10.600 15:11:59 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:10.600 15:11:59 -- pm/common@17 -- $ local monitor
00:02:10.600 15:11:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:10.600 15:11:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:10.600 15:11:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:10.600 15:11:59 -- pm/common@21 -- $ date +%s
00:02:10.600 15:11:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:10.600 15:11:59 -- pm/common@21 -- $ date +%s
00:02:10.600 15:11:59 -- pm/common@25 -- $ sleep 1
00:02:10.600 15:11:59 -- pm/common@21 -- $ date +%s
00:02:10.600 15:11:59 -- pm/common@21 -- $ date +%s
00:02:10.600 15:11:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732111919
00:02:10.600 15:11:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732111919
00:02:10.600 15:11:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732111919
00:02:10.600 15:11:59 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732111919
00:02:10.600 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732111919_collect-vmstat.pm.log
00:02:10.600 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732111919_collect-cpu-load.pm.log
00:02:10.600 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732111919_collect-cpu-temp.pm.log
00:02:10.600 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732111919_collect-bmc-pm.bmc.pm.log
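start_monitor_resources just launched four background collectors (CPU load, vmstat, CPU temperature, BMC power) whose output is redirected to the pm.log files named above; the trap on the next line tears them down when autobuild exits. A stripped-down stand-in for one collector, with the structure assumed from the file names rather than taken from the real scripts:

    # hypothetical sampler: one /proc/loadavg snapshot per second until killed
    outdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output/power
    mkdir -p "$outdir"
    ( while :; do
          echo "$(date +%s) $(cat /proc/loadavg)"
          sleep 1
      done >> "$outdir/collect-cpu-load.pm.log" ) &
    echo $! > "$outdir/collect-cpu-load.pid"   # lets an EXIT trap stop it later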
00:02:11.544 15:12:00 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:11.544 15:12:00 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:11.544 15:12:00 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:11.544 15:12:00 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:11.544 15:12:00 -- spdk/autobuild.sh@16 -- $ date -u
00:02:11.544 Wed Nov 20 02:12:00 PM UTC 2024
00:02:11.544 15:12:00 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:11.544 v25.01-pre-223-g32c3f377c
00:02:11.544 15:12:00 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:02:11.544 15:12:00 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:11.544 15:12:00 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:11.544 15:12:00 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:11.544 15:12:00 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:11.544 15:12:00 -- common/autotest_common.sh@10 -- $ set +x
00:02:11.544 ************************************
00:02:11.544 START TEST ubsan
00:02:11.544 ************************************
00:02:11.544 15:12:00 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:11.544 using ubsan
00:02:11.545
00:02:11.545 real 0m0.001s
00:02:11.545 user 0m0.000s
00:02:11.545 sys 0m0.000s
00:02:11.545 15:12:00 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:11.545 15:12:00 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:11.545 ************************************
00:02:11.545 END TEST ubsan
00:02:11.545 ************************************
00:02:11.545 15:12:00 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:11.545 15:12:00 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:11.545 15:12:00 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:11.545 15:12:00 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:11.545 15:12:00 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:11.545 15:12:00 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:11.545 15:12:00 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:11.545 15:12:00 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:11.545 15:12:00 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:02:11.806 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:02:11.806 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:12.067 Using 'verbs' RDMA provider
00:02:27.943 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:40.174 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:41.007 Creating mk/config.mk...done.
00:02:41.007 Creating mk/cc.flags.mk...done.
00:02:41.007 Type 'make' to build.
00:02:41.007 15:12:29 -- spdk/autobuild.sh@70 -- $ run_test make make -j144
00:02:41.007 15:12:29 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:41.007 15:12:29 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:41.007 15:12:29 -- common/autotest_common.sh@10 -- $ set +x
00:02:41.007 ************************************
00:02:41.007 START TEST make
00:02:41.007 ************************************
00:02:41.007 15:12:29 make -- common/autotest_common.sh@1129 -- $ make -j144
00:02:41.269 make[1]: Nothing to be done for 'all'.
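Both tests above went through run_test, which is what produces the START TEST / END TEST banners and the real/user/sys timing in the output. Its observable behavior can be approximated with a small wrapper; this is a sketch of the pattern, not the real function from autotest_common.sh:

    run_test_sketch() {    # hypothetical stand-in for run_test
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"; local rc=$?    # run the suite, reporting real/user/sys as in the log
        echo "************ END TEST $name ************"
        return $rc
    }
    run_test_sketch ubsan echo 'using ubsan'
    run_test_sketch make make -j144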
00:02:43.186 The Meson build system
00:02:43.186 Version: 1.5.0
00:02:43.186 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:02:43.186 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:43.186 Build type: native build
00:02:43.187 Project name: libvfio-user
00:02:43.187 Project version: 0.0.1
00:02:43.187 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:43.187 C linker for the host machine: cc ld.bfd 2.40-14
00:02:43.187 Host machine cpu family: x86_64
00:02:43.187 Host machine cpu: x86_64
00:02:43.187 Run-time dependency threads found: YES
00:02:43.187 Library dl found: YES
00:02:43.187 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:43.187 Run-time dependency json-c found: YES 0.17
00:02:43.187 Run-time dependency cmocka found: YES 1.1.7
00:02:43.187 Program pytest-3 found: NO
00:02:43.187 Program flake8 found: NO
00:02:43.187 Program misspell-fixer found: NO
00:02:43.187 Program restructuredtext-lint found: NO
00:02:43.187 Program valgrind found: YES (/usr/bin/valgrind)
00:02:43.187 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:43.187 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:43.187 Compiler for C supports arguments -Wwrite-strings: YES
00:02:43.187 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:43.187 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:43.187 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:43.187 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:43.187 Build targets in project: 8
00:02:43.187 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:43.187 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:43.187
00:02:43.187 libvfio-user 0.0.1
00:02:43.187
00:02:43.187 User defined options
00:02:43.187 buildtype : debug
00:02:43.187 default_library: shared
00:02:43.187 libdir : /usr/local/lib
00:02:43.187
00:02:43.187 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:43.449 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:43.449 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:02:43.449 [2/37] Compiling C object samples/null.p/null.c.o
00:02:43.449 [3/37] Compiling C object samples/lspci.p/lspci.c.o
00:02:43.449 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:02:43.449 [5/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:43.449 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:43.449 [7/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:43.449 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:43.449 [9/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:43.449 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:02:43.449 [11/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:43.449 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:43.449 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:02:43.449 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:43.449 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:43.449 [16/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:43.449 [17/37] Compiling C object test/unit_tests.p/mocks.c.o
00:02:43.449 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:02:43.449 [19/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:43.449 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:43.449 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:43.449 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:02:43.449 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:02:43.449 [24/37] Compiling C object samples/server.p/server.c.o
00:02:43.449 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:43.449 [26/37] Compiling C object samples/client.p/client.c.o
00:02:43.449 [27/37] Linking target samples/client
00:02:43.711 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:02:43.711 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:43.711 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:02:43.711 [31/37] Linking target test/unit_tests
00:02:43.711 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:02:43.711 [33/37] Linking target samples/server
00:02:43.711 [34/37] Linking target samples/gpio-pci-idio-16
00:02:43.711 [35/37] Linking target samples/null
00:02:43.711 [36/37] Linking target samples/shadow_ioeventfd_server
00:02:43.711 [37/37] Linking target samples/lspci
00:02:43.711 INFO: autodetecting backend as ninja
00:02:43.711 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
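With the 37 libvfio-user targets linked, the next lines stage the library into the SPDK tree rather than installing to /usr/local: meson install is pointed at the same out-of-tree build directory and rooted with DESTDIR. Condensed from the surrounding log lines:

    build=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
    ninja -C "$build"                       # a no-op here: everything is already built
    DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user \
        meson install --quiet -C "$build"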
00:02:43.972 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:44.233 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:44.233 ninja: no work to do.
00:02:50.824 The Meson build system
00:02:50.824 Version: 1.5.0
00:02:50.824 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:50.824 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:50.824 Build type: native build
00:02:50.824 Program cat found: YES (/usr/bin/cat)
00:02:50.824 Project name: DPDK
00:02:50.824 Project version: 24.03.0
00:02:50.824 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:50.824 C linker for the host machine: cc ld.bfd 2.40-14
00:02:50.824 Host machine cpu family: x86_64
00:02:50.824 Host machine cpu: x86_64
00:02:50.824 Message: ## Building in Developer Mode ##
00:02:50.824 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:50.824 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:50.824 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:50.824 Program python3 found: YES (/usr/bin/python3)
00:02:50.824 Program cat found: YES (/usr/bin/cat)
00:02:50.824 Compiler for C supports arguments -march=native: YES
00:02:50.824 Checking for size of "void *" : 8
00:02:50.824 Checking for size of "void *" : 8 (cached)
00:02:50.824 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:50.824 Library m found: YES
00:02:50.824 Library numa found: YES
00:02:50.824 Has header "numaif.h" : YES
00:02:50.824 Library fdt found: NO
00:02:50.824 Library execinfo found: NO
00:02:50.824 Has header "execinfo.h" : YES
00:02:50.824 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:50.824 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:50.824 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:50.824 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:50.824 Run-time dependency openssl found: YES 3.1.1
00:02:50.824 Run-time dependency libpcap found: YES 1.10.4
00:02:50.824 Has header "pcap.h" with dependency libpcap: YES
00:02:50.824 Compiler for C supports arguments -Wcast-qual: YES
00:02:50.824 Compiler for C supports arguments -Wdeprecated: YES
00:02:50.824 Compiler for C supports arguments -Wformat: YES
00:02:50.824 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:50.824 Compiler for C supports arguments -Wformat-security: NO
00:02:50.824 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:50.824 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:50.824 Compiler for C supports arguments -Wnested-externs: YES
00:02:50.824 Compiler for C supports arguments -Wold-style-definition: YES
00:02:50.824 Compiler for C supports arguments -Wpointer-arith: YES
00:02:50.824 Compiler for C supports arguments -Wsign-compare: YES
00:02:50.824 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:50.824 Compiler for C supports arguments -Wundef: YES
00:02:50.824 Compiler for C supports arguments -Wwrite-strings: YES
00:02:50.824 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:50.824 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:50.824 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:50.824 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:50.824 Program objdump found: YES (/usr/bin/objdump)
00:02:50.824 Compiler for C supports arguments -mavx512f: YES
00:02:50.824 Checking if "AVX512 checking" compiles: YES
00:02:50.824 Fetching value of define "__SSE4_2__" : 1
00:02:50.824 Fetching value of define "__AES__" : 1
00:02:50.824 Fetching value of define "__AVX__" : 1
00:02:50.824 Fetching value of define "__AVX2__" : 1
00:02:50.824 Fetching value of define "__AVX512BW__" : 1
00:02:50.824 Fetching value of define "__AVX512CD__" : 1
00:02:50.824 Fetching value of define "__AVX512DQ__" : 1
00:02:50.824 Fetching value of define "__AVX512F__" : 1
00:02:50.824 Fetching value of define "__AVX512VL__" : 1
00:02:50.824 Fetching value of define "__PCLMUL__" : 1
00:02:50.824 Fetching value of define "__RDRND__" : 1
00:02:50.824 Fetching value of define "__RDSEED__" : 1
00:02:50.824 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:50.824 Fetching value of define "__znver1__" : (undefined)
00:02:50.824 Fetching value of define "__znver2__" : (undefined)
00:02:50.824 Fetching value of define "__znver3__" : (undefined)
00:02:50.824 Fetching value of define "__znver4__" : (undefined)
00:02:50.824 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:50.824 Message: lib/log: Defining dependency "log"
00:02:50.824 Message: lib/kvargs: Defining dependency "kvargs"
00:02:50.824 Message: lib/telemetry: Defining dependency "telemetry"
00:02:50.824 Checking for function "getentropy" : NO
00:02:50.824 Message: lib/eal: Defining dependency "eal"
00:02:50.824 Message: lib/ring: Defining dependency "ring"
00:02:50.824 Message: lib/rcu: Defining dependency "rcu"
00:02:50.824 Message: lib/mempool: Defining dependency "mempool"
00:02:50.824 Message: lib/mbuf: Defining dependency "mbuf"
00:02:50.824 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:50.824 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:50.824 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:50.824 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:50.824 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:50.824 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:50.824 Compiler for C supports arguments -mpclmul: YES
00:02:50.824 Compiler for C supports arguments -maes: YES
00:02:50.824 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:50.824 Compiler for C supports arguments -mavx512bw: YES
00:02:50.824 Compiler for C supports arguments -mavx512dq: YES
00:02:50.824 Compiler for C supports arguments -mavx512vl: YES
00:02:50.824 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:50.824 Compiler for C supports arguments -mavx2: YES
00:02:50.824 Compiler for C supports arguments -mavx: YES
00:02:50.824 Message: lib/net: Defining dependency "net"
00:02:50.824 Message: lib/meter: Defining dependency "meter"
00:02:50.824 Message: lib/ethdev: Defining dependency "ethdev"
00:02:50.824 Message: lib/pci: Defining dependency "pci"
00:02:50.824 Message: lib/cmdline: Defining dependency "cmdline"
00:02:50.824 Message: lib/hash: Defining dependency "hash"
00:02:50.824 Message: lib/timer: Defining dependency "timer"
00:02:50.824 Message: lib/compressdev: Defining dependency "compressdev"
00:02:50.824 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:50.824 Message: lib/dmadev: Defining dependency "dmadev"
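Every "Compiler for C supports arguments" and "Fetching value of define" line above is meson probing the toolchain before committing to a flag. The same probes are easy to reproduce by hand when a check unexpectedly reports NO:

    # mirrors "Compiler for C supports arguments -mavx512f"
    echo 'int main(void){return 0;}' | cc -mavx512f -Werror -x c -o /dev/null - && echo YES
    # mirrors 'Fetching value of define "__AVX512F__"' under -march=native
    cc -march=native -dM -E - </dev/null | grep -E '__AVX512F__|__AES__|__PCLMUL__'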
00:02:50.825 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:50.825 Message: lib/power: Defining dependency "power"
00:02:50.825 Message: lib/reorder: Defining dependency "reorder"
00:02:50.825 Message: lib/security: Defining dependency "security"
00:02:50.825 Has header "linux/userfaultfd.h" : YES
00:02:50.825 Has header "linux/vduse.h" : YES
00:02:50.825 Message: lib/vhost: Defining dependency "vhost"
00:02:50.825 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:50.825 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:50.825 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:50.825 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:50.825 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:50.825 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:50.825 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:50.825 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:50.825 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:50.825 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:50.825 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:50.825 Configuring doxy-api-html.conf using configuration
00:02:50.825 Configuring doxy-api-man.conf using configuration
00:02:50.825 Program mandb found: YES (/usr/bin/mandb)
00:02:50.825 Program sphinx-build found: NO
00:02:50.825 Configuring rte_build_config.h using configuration
00:02:50.825 Message:
00:02:50.825 =================
00:02:50.825 Applications Enabled
00:02:50.825 =================
00:02:50.825
00:02:50.825 apps:
00:02:50.825
00:02:50.825
00:02:50.825 Message:
00:02:50.825 =================
00:02:50.825 Libraries Enabled
00:02:50.825 =================
00:02:50.825
00:02:50.825 libs:
00:02:50.825 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:50.825 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:50.825 cryptodev, dmadev, power, reorder, security, vhost,
00:02:50.825
00:02:50.825 Message:
00:02:50.825 ===============
00:02:50.825 Drivers Enabled
00:02:50.825 ===============
00:02:50.825
00:02:50.825 common:
00:02:50.825
00:02:50.825 bus:
00:02:50.825 pci, vdev,
00:02:50.825 mempool:
00:02:50.825 ring,
00:02:50.825 dma:
00:02:50.825
00:02:50.825 net:
00:02:50.825
00:02:50.825 crypto:
00:02:50.825
00:02:50.825 compress:
00:02:50.825
00:02:50.825 vdpa:
00:02:50.825
00:02:50.825
00:02:50.825 Message:
00:02:50.825 =================
00:02:50.825 Content Skipped
00:02:50.825 =================
00:02:50.825
00:02:50.825 apps:
00:02:50.825 dumpcap: explicitly disabled via build config
00:02:50.825 graph: explicitly disabled via build config
00:02:50.825 pdump: explicitly disabled via build config
00:02:50.825 proc-info: explicitly disabled via build config
00:02:50.825 test-acl: explicitly disabled via build config
00:02:50.825 test-bbdev: explicitly disabled via build config
00:02:50.825 test-cmdline: explicitly disabled via build config
00:02:50.825 test-compress-perf: explicitly disabled via build config
00:02:50.825 test-crypto-perf: explicitly disabled via build config
00:02:50.825 test-dma-perf: explicitly disabled via build config
00:02:50.825 test-eventdev: explicitly disabled via build config
00:02:50.825 test-fib: explicitly disabled via build config
00:02:50.825 test-flow-perf: explicitly disabled via build config
00:02:50.825 test-gpudev: explicitly disabled via build config
00:02:50.825 test-mldev: explicitly disabled via build config
00:02:50.825 test-pipeline: explicitly disabled via build config
00:02:50.825 test-pmd: explicitly disabled via build config
00:02:50.825 test-regex: explicitly disabled via build config
00:02:50.825 test-sad: explicitly disabled via build config
00:02:50.825 test-security-perf: explicitly disabled via build config
00:02:50.825
00:02:50.825 libs:
00:02:50.825 argparse: explicitly disabled via build config
00:02:50.825 metrics: explicitly disabled via build config
00:02:50.825 acl: explicitly disabled via build config
00:02:50.825 bbdev: explicitly disabled via build config
00:02:50.825 bitratestats: explicitly disabled via build config
00:02:50.825 bpf: explicitly disabled via build config
00:02:50.825 cfgfile: explicitly disabled via build config
00:02:50.825 distributor: explicitly disabled via build config
00:02:50.825 efd: explicitly disabled via build config
00:02:50.825 eventdev: explicitly disabled via build config
00:02:50.825 dispatcher: explicitly disabled via build config
00:02:50.825 gpudev: explicitly disabled via build config
00:02:50.825 gro: explicitly disabled via build config
00:02:50.825 gso: explicitly disabled via build config
00:02:50.825 ip_frag: explicitly disabled via build config
00:02:50.825 jobstats: explicitly disabled via build config
00:02:50.825 latencystats: explicitly disabled via build config
00:02:50.825 lpm: explicitly disabled via build config
00:02:50.825 member: explicitly disabled via build config
00:02:50.825 pcapng: explicitly disabled via build config
00:02:50.825 rawdev: explicitly disabled via build config
00:02:50.825 regexdev: explicitly disabled via build config
00:02:50.825 mldev: explicitly disabled via build config
00:02:50.825 rib: explicitly disabled via build config
00:02:50.825 sched: explicitly disabled via build config
00:02:50.825 stack: explicitly disabled via build config
00:02:50.825 ipsec: explicitly disabled via build config
00:02:50.825 pdcp: explicitly disabled via build config
00:02:50.825 fib: explicitly disabled via build config
00:02:50.825 port: explicitly disabled via build config
00:02:50.825 pdump: explicitly disabled via build config
00:02:50.825 table: explicitly disabled via build config
00:02:50.825 pipeline: explicitly disabled via build config
00:02:50.825 graph: explicitly disabled via build config
00:02:50.825 node: explicitly disabled via build config
00:02:50.825
00:02:50.825 drivers:
00:02:50.825 common/cpt: not in enabled drivers build config
00:02:50.825 common/dpaax: not in enabled drivers build config
00:02:50.825 common/iavf: not in enabled drivers build config
00:02:50.825 common/idpf: not in enabled drivers build config
00:02:50.825 common/ionic: not in enabled drivers build config
00:02:50.825 common/mvep: not in enabled drivers build config
00:02:50.825 common/octeontx: not in enabled drivers build config
00:02:50.825 bus/auxiliary: not in enabled drivers build config
00:02:50.825 bus/cdx: not in enabled drivers build config
00:02:50.825 bus/dpaa: not in enabled drivers build config
00:02:50.825 bus/fslmc: not in enabled drivers build config
00:02:50.825 bus/ifpga: not in enabled drivers build config
00:02:50.825 bus/platform: not in enabled drivers build config
00:02:50.825 bus/uacce: not in enabled drivers build config
00:02:50.825 bus/vmbus: not in enabled drivers build config
00:02:50.825 common/cnxk: not in enabled drivers build config
00:02:50.825 common/mlx5: not in enabled drivers build config
00:02:50.825 common/nfp: not in enabled drivers build config
00:02:50.825 common/nitrox: not in enabled drivers build config
00:02:50.825 common/qat: not in enabled drivers build config
00:02:50.825 common/sfc_efx: not in enabled drivers build config
00:02:50.825 mempool/bucket: not in enabled drivers build config
00:02:50.825 mempool/cnxk: not in enabled drivers build config
00:02:50.825 mempool/dpaa: not in enabled drivers build config
00:02:50.825 mempool/dpaa2: not in enabled drivers build config
00:02:50.825 mempool/octeontx: not in enabled drivers build config
00:02:50.825 mempool/stack: not in enabled drivers build config
00:02:50.825 dma/cnxk: not in enabled drivers build config
00:02:50.825 dma/dpaa: not in enabled drivers build config
00:02:50.825 dma/dpaa2: not in enabled drivers build config
00:02:50.825 dma/hisilicon: not in enabled drivers build config
00:02:50.825 dma/idxd: not in enabled drivers build config
00:02:50.825 dma/ioat: not in enabled drivers build config
00:02:50.825 dma/skeleton: not in enabled drivers build config
00:02:50.825 net/af_packet: not in enabled drivers build config
00:02:50.825 net/af_xdp: not in enabled drivers build config
00:02:50.825 net/ark: not in enabled drivers build config
00:02:50.825 net/atlantic: not in enabled drivers build config
00:02:50.825 net/avp: not in enabled drivers build config
00:02:50.825 net/axgbe: not in enabled drivers build config
00:02:50.825 net/bnx2x: not in enabled drivers build config
00:02:50.825 net/bnxt: not in enabled drivers build config
00:02:50.825 net/bonding: not in enabled drivers build config
00:02:50.825 net/cnxk: not in enabled drivers build config
00:02:50.825 net/cpfl: not in enabled drivers build config
00:02:50.825 net/cxgbe: not in enabled drivers build config
00:02:50.825 net/dpaa: not in enabled drivers build config
00:02:50.825 net/dpaa2: not in enabled drivers build config
00:02:50.825 net/e1000: not in enabled drivers build config
00:02:50.825 net/ena: not in enabled drivers build config
00:02:50.825 net/enetc: not in enabled drivers build config
00:02:50.825 net/enetfec: not in enabled drivers build config
00:02:50.825 net/enic: not in enabled drivers build config
00:02:50.825 net/failsafe: not in enabled drivers build config
00:02:50.825 net/fm10k: not in enabled drivers build config
00:02:50.825 net/gve: not in enabled drivers build config
00:02:50.825 net/hinic: not in enabled drivers build config
00:02:50.825 net/hns3: not in enabled drivers build config
00:02:50.825 net/i40e: not in enabled drivers build config
00:02:50.825 net/iavf: not in enabled drivers build config
00:02:50.825 net/ice: not in enabled drivers build config
00:02:50.825 net/idpf: not in enabled drivers build config
00:02:50.825 net/igc: not in enabled drivers build config
00:02:50.825 net/ionic: not in enabled drivers build config
00:02:50.825 net/ipn3ke: not in enabled drivers build config
00:02:50.825 net/ixgbe: not in enabled drivers build config
00:02:50.825 net/mana: not in enabled drivers build config
00:02:50.826 net/memif: not in enabled drivers build config
00:02:50.826 net/mlx4: not in enabled drivers build config
00:02:50.826 net/mlx5: not in enabled drivers build config
00:02:50.826 net/mvneta: not in enabled drivers build config
00:02:50.826 net/mvpp2: not in enabled drivers build config
00:02:50.826 net/netvsc: not in enabled drivers build config
00:02:50.826 net/nfb: not in enabled drivers build config
00:02:50.826 net/nfp: not in enabled drivers build config
00:02:50.826 net/ngbe: not in enabled drivers build config
00:02:50.826 net/null: not in enabled drivers build config
00:02:50.826 net/octeontx: not in enabled drivers build config
00:02:50.826 net/octeon_ep: not in enabled drivers build config
00:02:50.826 net/pcap: not in enabled drivers build config
00:02:50.826 net/pfe: not in enabled drivers build config
00:02:50.826 net/qede: not in enabled drivers build config
00:02:50.826 net/ring: not in enabled drivers build config
00:02:50.826 net/sfc: not in enabled drivers build config
00:02:50.826 net/softnic: not in enabled drivers build config
00:02:50.826 net/tap: not in enabled drivers build config
00:02:50.826 net/thunderx: not in enabled drivers build config
00:02:50.826 net/txgbe: not in enabled drivers build config
00:02:50.826 net/vdev_netvsc: not in enabled drivers build config
00:02:50.826 net/vhost: not in enabled drivers build config
00:02:50.826 net/virtio: not in enabled drivers build config
00:02:50.826 net/vmxnet3: not in enabled drivers build config
00:02:50.826 raw/*: missing internal dependency, "rawdev"
00:02:50.826 crypto/armv8: not in enabled drivers build config
00:02:50.826 crypto/bcmfs: not in enabled drivers build config
00:02:50.826 crypto/caam_jr: not in enabled drivers build config
00:02:50.826 crypto/ccp: not in enabled drivers build config
00:02:50.826 crypto/cnxk: not in enabled drivers build config
00:02:50.826 crypto/dpaa_sec: not in enabled drivers build config
00:02:50.826 crypto/dpaa2_sec: not in enabled drivers build config
00:02:50.826 crypto/ipsec_mb: not in enabled drivers build config
00:02:50.826 crypto/mlx5: not in enabled drivers build config
00:02:50.826 crypto/mvsam: not in enabled drivers build config
00:02:50.826 crypto/nitrox: not in enabled drivers build config
00:02:50.826 crypto/null: not in enabled drivers build config
00:02:50.826 crypto/octeontx: not in enabled drivers build config
00:02:50.826 crypto/openssl: not in enabled drivers build config
00:02:50.826 crypto/scheduler: not in enabled drivers build config
00:02:50.826 crypto/uadk: not in enabled drivers build config
00:02:50.826 crypto/virtio: not in enabled drivers build config
00:02:50.826 compress/isal: not in enabled drivers build config
00:02:50.826 compress/mlx5: not in enabled drivers build config
00:02:50.826 compress/nitrox: not in enabled drivers build config
00:02:50.826 compress/octeontx: not in enabled drivers build config
00:02:50.826 compress/zlib: not in enabled drivers build config
00:02:50.826 regex/*: missing internal dependency, "regexdev"
00:02:50.826 ml/*: missing internal dependency, "mldev"
00:02:50.826 vdpa/ifc: not in enabled drivers build config
00:02:50.826 vdpa/mlx5: not in enabled drivers build config
00:02:50.826 vdpa/nfp: not in enabled drivers build config
00:02:50.826 vdpa/sfc: not in enabled drivers build config
00:02:50.826 event/*: missing internal dependency, "eventdev"
00:02:50.826 baseband/*: missing internal dependency, "bbdev"
00:02:50.826 gpu/*: missing internal dependency, "gpudev"
00:02:50.826
00:02:50.826
00:02:50.826 Build targets in project: 84
00:02:50.826
00:02:50.826 DPDK 24.03.0
00:02:50.826
00:02:50.826 User defined options
00:02:50.826 buildtype : debug
00:02:50.826 default_library : shared
00:02:50.826 libdir : lib
00:02:50.826 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:50.826 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:50.826 c_link_args :
00:02:50.826 cpu_instruction_set: native
00:02:50.826 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:02:50.826 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:02:50.826 enable_docs : false
00:02:50.826 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:02:50.826 enable_kmods : false
00:02:50.826 max_lcores : 128
00:02:50.826 tests : false
00:02:50.826
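The "User defined options" block above is the record of the switches SPDK's configure handed to DPDK's meson. Reassembled into an equivalent invocation — a reconstruction for illustration only, with the long app/lib/driver lists (printed in full above) elided as ...:

    meson setup dpdk/build-tmp dpdk \
        --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build \
        --buildtype=debug --libdir=lib -Ddefault_library=shared \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Ddisable_apps=... -Ddisable_libs=... -Denable_drivers=... \
        -Denable_docs=false -Denable_kmods=false -Dmax_lcores=128 -Dtests=false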
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:02:50.826 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:02:50.826 enable_docs : false 00:02:50.826 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:50.826 enable_kmods : false 00:02:50.826 max_lcores : 128 00:02:50.826 tests : false 00:02:50.826 00:02:50.826 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:50.826 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:50.826 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:50.826 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:50.826 [3/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:50.826 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:50.826 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:50.826 [6/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:50.826 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:50.826 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:50.826 [9/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:50.826 [10/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:50.826 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:50.826 [12/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:50.826 [13/267] Linking static target lib/librte_kvargs.a 00:02:50.826 [14/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:50.826 [15/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:50.826 [16/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:50.826 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:50.826 [18/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:50.826 [19/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:50.826 [20/267] Linking static target lib/librte_log.a 00:02:50.826 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:50.826 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:51.085 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:51.085 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:51.085 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:51.085 [26/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:51.085 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:51.085 [28/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:51.085 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:51.085 [30/267] Linking static target 
lib/librte_pci.a 00:02:51.085 [31/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:51.085 [32/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:51.085 [33/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:51.085 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:51.085 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:51.085 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:51.085 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:51.085 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:51.344 [39/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.344 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:51.344 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:51.344 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:51.344 [43/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:51.344 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:51.344 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:51.344 [46/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.344 [47/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:51.344 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:51.344 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:51.344 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:51.344 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:51.344 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:51.344 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:51.344 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:51.344 [55/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:51.344 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:51.344 [57/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:51.344 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:51.344 [59/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:51.344 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:51.344 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:51.344 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:51.344 [63/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:51.344 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:51.345 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:51.345 [66/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:51.345 [67/267] Linking static target lib/librte_meter.a 00:02:51.345 [68/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:51.345 [69/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:51.345 [70/267] 
Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:51.345 [71/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:51.345 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:51.345 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:51.345 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:51.345 [75/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:51.345 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:51.345 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:51.345 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:51.345 [79/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:51.345 [80/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:51.345 [81/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:51.345 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:51.345 [83/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:51.345 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:51.345 [85/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:51.345 [86/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:51.345 [87/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:51.345 [88/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:51.345 [89/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:51.345 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:51.345 [91/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:51.345 [92/267] Linking static target lib/librte_cmdline.a 00:02:51.345 [93/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:51.345 [94/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:51.345 [95/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:51.345 [96/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:51.345 [97/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:51.345 [98/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:51.345 [99/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:51.345 [100/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:51.345 [101/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:51.345 [102/267] Linking static target lib/librte_ring.a 00:02:51.345 [103/267] Linking static target lib/librte_telemetry.a 00:02:51.345 [104/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:51.345 [105/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:51.345 [106/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:51.345 [107/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:51.345 [108/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:51.345 [109/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:51.345 [110/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 
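The ninja run above executes inside the DPDK build tree configured by the meson summary at the top of this section (disable_libs, enable_drivers, enable_docs, enable_kmods, tests, max_lcores). As a rough sketch only — using real DPDK meson option names but abbreviated values, not the verbatim autotest invocation — the equivalent configuration step would be:

    # Sketch, not the exact autotest command; list values are trimmed to a
    # representative subset of the configuration summary shown above.
    meson setup dpdk/build-tmp \
        -Denable_docs=false -Denable_kmods=false -Dtests=false \
        -Dmax_lcores=128 \
        -Ddisable_libs=bbdev,gpudev,mldev,pipeline \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring
    ninja -C dpdk/build-tmp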
00:02:51.345 [111/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:51.345 [112/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:51.345 [113/267] Linking static target lib/librte_timer.a 00:02:51.345 [114/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:51.345 [115/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:51.345 [116/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:51.345 [117/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:51.345 [118/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:51.345 [119/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:51.345 [120/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:51.345 [121/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:51.345 [122/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:51.345 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:51.345 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:51.345 [125/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:51.606 [126/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:51.606 [127/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:51.606 [128/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:51.606 [129/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:51.606 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:51.606 [131/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:51.606 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:51.606 [133/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:51.606 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:51.606 [135/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:51.606 [136/267] Linking static target lib/librte_compressdev.a 00:02:51.606 [137/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:51.606 [138/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.606 [139/267] Linking static target lib/librte_mempool.a 00:02:51.606 [140/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:51.606 [141/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:51.606 [142/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:51.606 [143/267] Linking static target lib/librte_net.a 00:02:51.606 [144/267] Linking static target lib/librte_power.a 00:02:51.606 [145/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:51.606 [146/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:51.606 [147/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:51.606 [148/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:51.606 [149/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:51.606 [150/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:51.606 [151/267] Linking static 
target drivers/libtmp_rte_bus_vdev.a 00:02:51.606 [152/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:51.606 [153/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:51.606 [154/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:51.606 [155/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:51.606 [156/267] Linking target lib/librte_log.so.24.1 00:02:51.606 [157/267] Linking static target lib/librte_dmadev.a 00:02:51.606 [158/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:51.606 [159/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:51.606 [160/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:51.606 [161/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:51.606 [162/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:51.606 [163/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:51.606 [164/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:51.606 [165/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:51.606 [166/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:51.606 [167/267] Linking static target lib/librte_rcu.a 00:02:51.606 [168/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:51.606 [169/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:51.606 [170/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:51.606 [171/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:51.606 [172/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:51.606 [173/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:51.606 [174/267] Linking static target lib/librte_reorder.a 00:02:51.607 [175/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:51.607 [176/267] Linking static target lib/librte_security.a 00:02:51.607 [177/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:51.607 [178/267] Linking static target lib/librte_eal.a 00:02:51.607 [179/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.607 [180/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:51.607 [181/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:51.607 [182/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:51.607 [183/267] Linking static target lib/librte_mbuf.a 00:02:51.607 [184/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:51.866 [185/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:51.866 [186/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:51.866 [187/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:51.866 [188/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:51.866 [189/267] Linking static target drivers/librte_bus_vdev.a 00:02:51.866 [190/267] Linking target lib/librte_kvargs.so.24.1 00:02:51.866 [191/267] Linking static target lib/librte_hash.a 00:02:51.866 [192/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:51.866 [193/267] Generating lib/ring.sym_chk with a custom command (wrapped 
by meson to capture output) 00:02:51.866 [194/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:51.866 [195/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:51.866 [196/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:51.866 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:51.866 [198/267] Linking static target drivers/librte_bus_pci.a 00:02:51.866 [199/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:51.866 [200/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.867 [201/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:51.867 [202/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:51.867 [203/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:51.867 [204/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:51.867 [205/267] Linking static target drivers/librte_mempool_ring.a 00:02:51.867 [206/267] Linking static target lib/librte_cryptodev.a 00:02:51.867 [207/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.124 [208/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.124 [209/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.124 [210/267] Linking target lib/librte_telemetry.so.24.1 00:02:52.124 [211/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:52.124 [212/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.124 [213/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.124 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:52.383 [215/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:52.383 [216/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.383 [217/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.383 [218/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.383 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:52.383 [220/267] Linking static target lib/librte_ethdev.a 00:02:52.644 [221/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.644 [222/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.644 [223/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.644 [224/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.644 [225/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.904 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.476 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:53.476 [228/267] Linking static target lib/librte_vhost.a 00:02:54.050 [229/267] Generating lib/cryptodev.sym_chk 
with a custom command (wrapped by meson to capture output) 00:02:55.969 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.555 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.498 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.498 [233/267] Linking target lib/librte_eal.so.24.1 00:03:03.498 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:03.759 [235/267] Linking target lib/librte_meter.so.24.1 00:03:03.759 [236/267] Linking target lib/librte_ring.so.24.1 00:03:03.759 [237/267] Linking target lib/librte_pci.so.24.1 00:03:03.759 [238/267] Linking target lib/librte_timer.so.24.1 00:03:03.759 [239/267] Linking target drivers/librte_bus_vdev.so.24.1 00:03:03.759 [240/267] Linking target lib/librte_dmadev.so.24.1 00:03:03.759 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:03.759 [242/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:03.759 [243/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:03.759 [244/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:03.759 [245/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:03.759 [246/267] Linking target lib/librte_rcu.so.24.1 00:03:03.759 [247/267] Linking target lib/librte_mempool.so.24.1 00:03:03.759 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:03:04.020 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:04.020 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:04.020 [251/267] Linking target lib/librte_mbuf.so.24.1 00:03:04.020 [252/267] Linking target drivers/librte_mempool_ring.so.24.1 00:03:04.020 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:04.281 [254/267] Linking target lib/librte_net.so.24.1 00:03:04.281 [255/267] Linking target lib/librte_compressdev.so.24.1 00:03:04.281 [256/267] Linking target lib/librte_reorder.so.24.1 00:03:04.281 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:03:04.281 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:04.281 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:04.281 [260/267] Linking target lib/librte_cmdline.so.24.1 00:03:04.281 [261/267] Linking target lib/librte_hash.so.24.1 00:03:04.281 [262/267] Linking target lib/librte_security.so.24.1 00:03:04.281 [263/267] Linking target lib/librte_ethdev.so.24.1 00:03:04.543 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:04.543 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:04.543 [266/267] Linking target lib/librte_power.so.24.1 00:03:04.543 [267/267] Linking target lib/librte_vhost.so.24.1 00:03:04.543 INFO: autodetecting backend as ninja 00:03:04.543 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:03:07.847 CC lib/log/log.o 00:03:07.847 CC lib/ut_mock/mock.o 00:03:07.847 CC lib/log/log_flags.o 00:03:07.847 CC lib/ut/ut.o 00:03:07.847 CC lib/log/log_deprecated.o 00:03:07.847 LIB libspdk_log.a 00:03:07.847 LIB 
libspdk_ut.a 00:03:07.847 LIB libspdk_ut_mock.a 00:03:07.847 SO libspdk_ut.so.2.0 00:03:07.847 SO libspdk_log.so.7.1 00:03:07.847 SO libspdk_ut_mock.so.6.0 00:03:07.847 SYMLINK libspdk_ut.so 00:03:07.847 SYMLINK libspdk_ut_mock.so 00:03:07.847 SYMLINK libspdk_log.so 00:03:08.110 CC lib/ioat/ioat.o 00:03:08.110 CC lib/util/base64.o 00:03:08.110 CC lib/util/bit_array.o 00:03:08.110 CC lib/util/cpuset.o 00:03:08.110 CC lib/dma/dma.o 00:03:08.110 CC lib/util/crc16.o 00:03:08.110 CXX lib/trace_parser/trace.o 00:03:08.110 CC lib/util/crc32.o 00:03:08.110 CC lib/util/crc32c.o 00:03:08.110 CC lib/util/crc32_ieee.o 00:03:08.110 CC lib/util/crc64.o 00:03:08.110 CC lib/util/dif.o 00:03:08.110 CC lib/util/fd.o 00:03:08.110 CC lib/util/fd_group.o 00:03:08.110 CC lib/util/file.o 00:03:08.110 CC lib/util/hexlify.o 00:03:08.110 CC lib/util/iov.o 00:03:08.110 CC lib/util/math.o 00:03:08.110 CC lib/util/net.o 00:03:08.110 CC lib/util/pipe.o 00:03:08.110 CC lib/util/strerror_tls.o 00:03:08.371 CC lib/util/string.o 00:03:08.371 CC lib/util/uuid.o 00:03:08.371 CC lib/util/xor.o 00:03:08.371 CC lib/util/zipf.o 00:03:08.371 CC lib/util/md5.o 00:03:08.371 CC lib/vfio_user/host/vfio_user_pci.o 00:03:08.371 CC lib/vfio_user/host/vfio_user.o 00:03:08.371 LIB libspdk_dma.a 00:03:08.633 SO libspdk_dma.so.5.0 00:03:08.633 LIB libspdk_ioat.a 00:03:08.633 SO libspdk_ioat.so.7.0 00:03:08.633 SYMLINK libspdk_dma.so 00:03:08.633 SYMLINK libspdk_ioat.so 00:03:08.633 LIB libspdk_vfio_user.a 00:03:08.633 SO libspdk_vfio_user.so.5.0 00:03:08.895 LIB libspdk_util.a 00:03:08.895 SYMLINK libspdk_vfio_user.so 00:03:08.895 SO libspdk_util.so.10.1 00:03:08.895 SYMLINK libspdk_util.so 00:03:09.157 LIB libspdk_trace_parser.a 00:03:09.157 SO libspdk_trace_parser.so.6.0 00:03:09.158 SYMLINK libspdk_trace_parser.so 00:03:09.419 CC lib/conf/conf.o 00:03:09.419 CC lib/vmd/vmd.o 00:03:09.419 CC lib/vmd/led.o 00:03:09.419 CC lib/json/json_parse.o 00:03:09.419 CC lib/json/json_util.o 00:03:09.419 CC lib/rdma_utils/rdma_utils.o 00:03:09.419 CC lib/json/json_write.o 00:03:09.419 CC lib/env_dpdk/env.o 00:03:09.419 CC lib/idxd/idxd.o 00:03:09.419 CC lib/env_dpdk/memory.o 00:03:09.419 CC lib/idxd/idxd_user.o 00:03:09.419 CC lib/env_dpdk/pci.o 00:03:09.419 CC lib/idxd/idxd_kernel.o 00:03:09.419 CC lib/env_dpdk/init.o 00:03:09.419 CC lib/env_dpdk/threads.o 00:03:09.419 CC lib/env_dpdk/pci_ioat.o 00:03:09.419 CC lib/env_dpdk/pci_virtio.o 00:03:09.419 CC lib/env_dpdk/pci_vmd.o 00:03:09.419 CC lib/env_dpdk/pci_idxd.o 00:03:09.419 CC lib/env_dpdk/pci_event.o 00:03:09.419 CC lib/env_dpdk/sigbus_handler.o 00:03:09.419 CC lib/env_dpdk/pci_dpdk.o 00:03:09.419 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:09.419 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:09.682 LIB libspdk_conf.a 00:03:09.682 LIB libspdk_rdma_utils.a 00:03:09.682 SO libspdk_conf.so.6.0 00:03:09.682 SO libspdk_rdma_utils.so.1.0 00:03:09.682 LIB libspdk_json.a 00:03:09.682 SYMLINK libspdk_conf.so 00:03:09.682 SO libspdk_json.so.6.0 00:03:09.682 SYMLINK libspdk_rdma_utils.so 00:03:09.943 SYMLINK libspdk_json.so 00:03:09.943 LIB libspdk_idxd.a 00:03:09.943 SO libspdk_idxd.so.12.1 00:03:09.943 LIB libspdk_vmd.a 00:03:09.943 SO libspdk_vmd.so.6.0 00:03:09.943 SYMLINK libspdk_idxd.so 00:03:10.205 SYMLINK libspdk_vmd.so 00:03:10.205 CC lib/rdma_provider/common.o 00:03:10.205 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:10.205 CC lib/jsonrpc/jsonrpc_server.o 00:03:10.205 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:10.205 CC lib/jsonrpc/jsonrpc_client.o 00:03:10.205 CC lib/jsonrpc/jsonrpc_client_tcp.o 
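For readers of the SPDK portion of the log: CC lines compile one object file, LIB lines archive a static library, and each SO/SYMLINK pair appears to link a versioned shared object and then create its unversioned development symlink (an inference from the line prefixes, not stated in the log). For the log library above, that amounts to roughly:

    # Assumed semantics of one SO/SYMLINK pair (a sketch, not the actual SPDK
    # Makefile rule): link the versioned .so, then symlink the bare name.
    cc -shared -o libspdk_log.so.7.1 log.o log_flags.o log_deprecated.o
    ln -sf libspdk_log.so.7.1 libspdk_log.so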
00:03:10.466 LIB libspdk_rdma_provider.a 00:03:10.466 SO libspdk_rdma_provider.so.7.0 00:03:10.466 LIB libspdk_jsonrpc.a 00:03:10.466 SYMLINK libspdk_rdma_provider.so 00:03:10.466 SO libspdk_jsonrpc.so.6.0 00:03:10.466 SYMLINK libspdk_jsonrpc.so 00:03:10.728 LIB libspdk_env_dpdk.a 00:03:10.728 SO libspdk_env_dpdk.so.15.1 00:03:10.989 SYMLINK libspdk_env_dpdk.so 00:03:10.989 CC lib/rpc/rpc.o 00:03:11.251 LIB libspdk_rpc.a 00:03:11.251 SO libspdk_rpc.so.6.0 00:03:11.251 SYMLINK libspdk_rpc.so 00:03:11.512 CC lib/trace/trace.o 00:03:11.512 CC lib/trace/trace_flags.o 00:03:11.512 CC lib/trace/trace_rpc.o 00:03:11.512 CC lib/notify/notify.o 00:03:11.512 CC lib/notify/notify_rpc.o 00:03:11.512 CC lib/keyring/keyring.o 00:03:11.512 CC lib/keyring/keyring_rpc.o 00:03:11.774 LIB libspdk_notify.a 00:03:11.774 SO libspdk_notify.so.6.0 00:03:11.774 LIB libspdk_keyring.a 00:03:11.774 LIB libspdk_trace.a 00:03:12.035 SYMLINK libspdk_notify.so 00:03:12.035 SO libspdk_keyring.so.2.0 00:03:12.035 SO libspdk_trace.so.11.0 00:03:12.035 SYMLINK libspdk_keyring.so 00:03:12.035 SYMLINK libspdk_trace.so 00:03:12.296 CC lib/thread/thread.o 00:03:12.296 CC lib/sock/sock.o 00:03:12.296 CC lib/sock/sock_rpc.o 00:03:12.296 CC lib/thread/iobuf.o 00:03:12.873 LIB libspdk_sock.a 00:03:12.873 SO libspdk_sock.so.10.0 00:03:12.873 SYMLINK libspdk_sock.so 00:03:13.135 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:13.135 CC lib/nvme/nvme_ctrlr.o 00:03:13.135 CC lib/nvme/nvme_fabric.o 00:03:13.135 CC lib/nvme/nvme_ns_cmd.o 00:03:13.135 CC lib/nvme/nvme_ns.o 00:03:13.135 CC lib/nvme/nvme_pcie_common.o 00:03:13.135 CC lib/nvme/nvme_pcie.o 00:03:13.135 CC lib/nvme/nvme_qpair.o 00:03:13.135 CC lib/nvme/nvme.o 00:03:13.135 CC lib/nvme/nvme_quirks.o 00:03:13.135 CC lib/nvme/nvme_transport.o 00:03:13.135 CC lib/nvme/nvme_discovery.o 00:03:13.135 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:13.135 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:13.135 CC lib/nvme/nvme_tcp.o 00:03:13.135 CC lib/nvme/nvme_opal.o 00:03:13.135 CC lib/nvme/nvme_io_msg.o 00:03:13.135 CC lib/nvme/nvme_poll_group.o 00:03:13.135 CC lib/nvme/nvme_zns.o 00:03:13.135 CC lib/nvme/nvme_stubs.o 00:03:13.135 CC lib/nvme/nvme_auth.o 00:03:13.135 CC lib/nvme/nvme_cuse.o 00:03:13.135 CC lib/nvme/nvme_vfio_user.o 00:03:13.135 CC lib/nvme/nvme_rdma.o 00:03:13.706 LIB libspdk_thread.a 00:03:13.706 SO libspdk_thread.so.11.0 00:03:13.706 SYMLINK libspdk_thread.so 00:03:14.278 CC lib/accel/accel.o 00:03:14.278 CC lib/accel/accel_rpc.o 00:03:14.278 CC lib/accel/accel_sw.o 00:03:14.278 CC lib/init/json_config.o 00:03:14.278 CC lib/init/subsystem.o 00:03:14.278 CC lib/init/subsystem_rpc.o 00:03:14.278 CC lib/init/rpc.o 00:03:14.278 CC lib/fsdev/fsdev.o 00:03:14.278 CC lib/vfu_tgt/tgt_endpoint.o 00:03:14.278 CC lib/fsdev/fsdev_io.o 00:03:14.278 CC lib/vfu_tgt/tgt_rpc.o 00:03:14.278 CC lib/fsdev/fsdev_rpc.o 00:03:14.279 CC lib/virtio/virtio.o 00:03:14.279 CC lib/virtio/virtio_vhost_user.o 00:03:14.279 CC lib/blob/blobstore.o 00:03:14.279 CC lib/virtio/virtio_vfio_user.o 00:03:14.279 CC lib/blob/request.o 00:03:14.279 CC lib/virtio/virtio_pci.o 00:03:14.279 CC lib/blob/zeroes.o 00:03:14.279 CC lib/blob/blob_bs_dev.o 00:03:14.279 LIB libspdk_init.a 00:03:14.279 SO libspdk_init.so.6.0 00:03:14.539 LIB libspdk_vfu_tgt.a 00:03:14.539 LIB libspdk_virtio.a 00:03:14.539 SYMLINK libspdk_init.so 00:03:14.539 SO libspdk_virtio.so.7.0 00:03:14.539 SO libspdk_vfu_tgt.so.3.0 00:03:14.539 SYMLINK libspdk_vfu_tgt.so 00:03:14.539 SYMLINK libspdk_virtio.so 00:03:14.800 LIB libspdk_fsdev.a 00:03:14.800 SO 
libspdk_fsdev.so.2.0 00:03:14.800 SYMLINK libspdk_fsdev.so 00:03:14.800 CC lib/event/app.o 00:03:14.800 CC lib/event/reactor.o 00:03:14.800 CC lib/event/log_rpc.o 00:03:14.800 CC lib/event/app_rpc.o 00:03:14.800 CC lib/event/scheduler_static.o 00:03:15.061 LIB libspdk_accel.a 00:03:15.061 SO libspdk_accel.so.16.0 00:03:15.061 LIB libspdk_nvme.a 00:03:15.061 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:15.322 SYMLINK libspdk_accel.so 00:03:15.322 LIB libspdk_event.a 00:03:15.322 SO libspdk_nvme.so.15.0 00:03:15.322 SO libspdk_event.so.14.0 00:03:15.322 SYMLINK libspdk_event.so 00:03:15.584 SYMLINK libspdk_nvme.so 00:03:15.584 CC lib/bdev/bdev.o 00:03:15.584 CC lib/bdev/bdev_rpc.o 00:03:15.584 CC lib/bdev/bdev_zone.o 00:03:15.584 CC lib/bdev/part.o 00:03:15.584 CC lib/bdev/scsi_nvme.o 00:03:15.845 LIB libspdk_fuse_dispatcher.a 00:03:15.845 SO libspdk_fuse_dispatcher.so.1.0 00:03:15.845 SYMLINK libspdk_fuse_dispatcher.so 00:03:16.787 LIB libspdk_blob.a 00:03:16.787 SO libspdk_blob.so.11.0 00:03:17.049 SYMLINK libspdk_blob.so 00:03:17.310 CC lib/blobfs/blobfs.o 00:03:17.310 CC lib/blobfs/tree.o 00:03:17.310 CC lib/lvol/lvol.o 00:03:17.881 LIB libspdk_bdev.a 00:03:17.881 SO libspdk_bdev.so.17.0 00:03:18.143 LIB libspdk_blobfs.a 00:03:18.143 SYMLINK libspdk_bdev.so 00:03:18.143 SO libspdk_blobfs.so.10.0 00:03:18.143 LIB libspdk_lvol.a 00:03:18.143 SYMLINK libspdk_blobfs.so 00:03:18.143 SO libspdk_lvol.so.10.0 00:03:18.143 SYMLINK libspdk_lvol.so 00:03:18.403 CC lib/scsi/dev.o 00:03:18.404 CC lib/scsi/lun.o 00:03:18.404 CC lib/scsi/port.o 00:03:18.404 CC lib/scsi/scsi.o 00:03:18.404 CC lib/scsi/scsi_bdev.o 00:03:18.404 CC lib/scsi/scsi_pr.o 00:03:18.404 CC lib/nvmf/ctrlr.o 00:03:18.404 CC lib/nbd/nbd.o 00:03:18.404 CC lib/scsi/scsi_rpc.o 00:03:18.404 CC lib/nvmf/ctrlr_discovery.o 00:03:18.404 CC lib/scsi/task.o 00:03:18.404 CC lib/nvmf/ctrlr_bdev.o 00:03:18.404 CC lib/nbd/nbd_rpc.o 00:03:18.404 CC lib/nvmf/subsystem.o 00:03:18.404 CC lib/nvmf/nvmf.o 00:03:18.404 CC lib/ublk/ublk.o 00:03:18.404 CC lib/ftl/ftl_core.o 00:03:18.404 CC lib/nvmf/nvmf_rpc.o 00:03:18.404 CC lib/ublk/ublk_rpc.o 00:03:18.404 CC lib/ftl/ftl_init.o 00:03:18.404 CC lib/nvmf/transport.o 00:03:18.404 CC lib/ftl/ftl_layout.o 00:03:18.404 CC lib/nvmf/tcp.o 00:03:18.404 CC lib/ftl/ftl_debug.o 00:03:18.404 CC lib/nvmf/stubs.o 00:03:18.404 CC lib/ftl/ftl_io.o 00:03:18.404 CC lib/ftl/ftl_sb.o 00:03:18.404 CC lib/nvmf/mdns_server.o 00:03:18.404 CC lib/nvmf/vfio_user.o 00:03:18.404 CC lib/ftl/ftl_l2p.o 00:03:18.404 CC lib/ftl/ftl_l2p_flat.o 00:03:18.404 CC lib/nvmf/rdma.o 00:03:18.404 CC lib/ftl/ftl_nv_cache.o 00:03:18.404 CC lib/nvmf/auth.o 00:03:18.404 CC lib/ftl/ftl_band.o 00:03:18.404 CC lib/ftl/ftl_band_ops.o 00:03:18.404 CC lib/ftl/ftl_writer.o 00:03:18.404 CC lib/ftl/ftl_rq.o 00:03:18.404 CC lib/ftl/ftl_reloc.o 00:03:18.404 CC lib/ftl/ftl_l2p_cache.o 00:03:18.404 CC lib/ftl/ftl_p2l.o 00:03:18.404 CC lib/ftl/ftl_p2l_log.o 00:03:18.404 CC lib/ftl/mngt/ftl_mngt.o 00:03:18.404 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:18.404 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:18.404 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:18.404 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:18.404 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:18.404 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:18.404 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:18.404 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:18.404 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:18.404 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:18.404 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:18.404 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:18.404 CC 
lib/ftl/utils/ftl_conf.o 00:03:18.404 CC lib/ftl/utils/ftl_md.o 00:03:18.404 CC lib/ftl/utils/ftl_bitmap.o 00:03:18.404 CC lib/ftl/utils/ftl_mempool.o 00:03:18.404 CC lib/ftl/utils/ftl_property.o 00:03:18.404 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:18.404 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:18.404 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:18.404 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:18.404 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:18.404 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:18.404 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:18.404 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:18.404 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:18.404 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:18.404 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:18.404 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:18.404 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:18.404 CC lib/ftl/base/ftl_base_dev.o 00:03:18.663 CC lib/ftl/base/ftl_base_bdev.o 00:03:18.663 CC lib/ftl/ftl_trace.o 00:03:19.235 LIB libspdk_nbd.a 00:03:19.235 SO libspdk_nbd.so.7.0 00:03:19.235 SYMLINK libspdk_nbd.so 00:03:19.235 LIB libspdk_scsi.a 00:03:19.235 SO libspdk_scsi.so.9.0 00:03:19.495 LIB libspdk_ublk.a 00:03:19.495 SYMLINK libspdk_scsi.so 00:03:19.495 SO libspdk_ublk.so.3.0 00:03:19.495 SYMLINK libspdk_ublk.so 00:03:19.755 LIB libspdk_ftl.a 00:03:19.755 CC lib/vhost/vhost.o 00:03:19.755 CC lib/vhost/vhost_rpc.o 00:03:19.755 CC lib/iscsi/conn.o 00:03:19.756 CC lib/vhost/vhost_scsi.o 00:03:19.756 CC lib/iscsi/init_grp.o 00:03:19.756 CC lib/vhost/vhost_blk.o 00:03:19.756 CC lib/vhost/rte_vhost_user.o 00:03:19.756 CC lib/iscsi/iscsi.o 00:03:19.756 CC lib/iscsi/param.o 00:03:19.756 CC lib/iscsi/portal_grp.o 00:03:19.756 CC lib/iscsi/tgt_node.o 00:03:19.756 CC lib/iscsi/iscsi_subsystem.o 00:03:19.756 CC lib/iscsi/iscsi_rpc.o 00:03:19.756 CC lib/iscsi/task.o 00:03:19.756 SO libspdk_ftl.so.9.0 00:03:20.017 SYMLINK libspdk_ftl.so 00:03:20.589 LIB libspdk_nvmf.a 00:03:20.589 SO libspdk_nvmf.so.20.0 00:03:20.856 LIB libspdk_vhost.a 00:03:20.856 SO libspdk_vhost.so.8.0 00:03:20.856 SYMLINK libspdk_nvmf.so 00:03:20.856 SYMLINK libspdk_vhost.so 00:03:21.151 LIB libspdk_iscsi.a 00:03:21.151 SO libspdk_iscsi.so.8.0 00:03:21.151 SYMLINK libspdk_iscsi.so 00:03:21.827 CC module/env_dpdk/env_dpdk_rpc.o 00:03:21.827 CC module/vfu_device/vfu_virtio.o 00:03:21.827 CC module/vfu_device/vfu_virtio_blk.o 00:03:21.827 CC module/vfu_device/vfu_virtio_scsi.o 00:03:21.827 CC module/vfu_device/vfu_virtio_rpc.o 00:03:21.827 CC module/vfu_device/vfu_virtio_fs.o 00:03:22.092 LIB libspdk_env_dpdk_rpc.a 00:03:22.092 CC module/accel/error/accel_error.o 00:03:22.092 CC module/accel/error/accel_error_rpc.o 00:03:22.092 CC module/blob/bdev/blob_bdev.o 00:03:22.092 CC module/accel/dsa/accel_dsa.o 00:03:22.092 CC module/accel/dsa/accel_dsa_rpc.o 00:03:22.092 CC module/accel/ioat/accel_ioat.o 00:03:22.092 CC module/accel/iaa/accel_iaa.o 00:03:22.092 CC module/accel/ioat/accel_ioat_rpc.o 00:03:22.092 CC module/accel/iaa/accel_iaa_rpc.o 00:03:22.092 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:22.092 CC module/scheduler/gscheduler/gscheduler.o 00:03:22.092 CC module/fsdev/aio/fsdev_aio.o 00:03:22.092 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:22.092 CC module/sock/posix/posix.o 00:03:22.092 CC module/fsdev/aio/linux_aio_mgr.o 00:03:22.092 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:22.092 CC module/keyring/file/keyring.o 00:03:22.092 CC module/keyring/linux/keyring.o 00:03:22.092 CC module/keyring/file/keyring_rpc.o 00:03:22.092 CC module/keyring/linux/keyring_rpc.o 
00:03:22.092 SO libspdk_env_dpdk_rpc.so.6.0 00:03:22.092 SYMLINK libspdk_env_dpdk_rpc.so 00:03:22.092 LIB libspdk_keyring_linux.a 00:03:22.092 LIB libspdk_scheduler_gscheduler.a 00:03:22.092 LIB libspdk_keyring_file.a 00:03:22.353 LIB libspdk_scheduler_dpdk_governor.a 00:03:22.353 LIB libspdk_accel_error.a 00:03:22.353 SO libspdk_scheduler_gscheduler.so.4.0 00:03:22.353 LIB libspdk_accel_ioat.a 00:03:22.353 SO libspdk_keyring_linux.so.1.0 00:03:22.353 SO libspdk_keyring_file.so.2.0 00:03:22.353 LIB libspdk_scheduler_dynamic.a 00:03:22.353 LIB libspdk_accel_iaa.a 00:03:22.353 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:22.353 SO libspdk_accel_error.so.2.0 00:03:22.353 SO libspdk_accel_ioat.so.6.0 00:03:22.353 SO libspdk_accel_iaa.so.3.0 00:03:22.353 SO libspdk_scheduler_dynamic.so.4.0 00:03:22.353 LIB libspdk_blob_bdev.a 00:03:22.353 SYMLINK libspdk_scheduler_gscheduler.so 00:03:22.353 LIB libspdk_accel_dsa.a 00:03:22.353 SYMLINK libspdk_keyring_linux.so 00:03:22.353 SYMLINK libspdk_keyring_file.so 00:03:22.353 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:22.353 SYMLINK libspdk_accel_error.so 00:03:22.353 SO libspdk_blob_bdev.so.11.0 00:03:22.353 SYMLINK libspdk_accel_ioat.so 00:03:22.353 SO libspdk_accel_dsa.so.5.0 00:03:22.353 SYMLINK libspdk_scheduler_dynamic.so 00:03:22.353 SYMLINK libspdk_accel_iaa.so 00:03:22.353 LIB libspdk_vfu_device.a 00:03:22.353 SYMLINK libspdk_blob_bdev.so 00:03:22.353 SYMLINK libspdk_accel_dsa.so 00:03:22.353 SO libspdk_vfu_device.so.3.0 00:03:22.615 SYMLINK libspdk_vfu_device.so 00:03:22.615 LIB libspdk_fsdev_aio.a 00:03:22.615 SO libspdk_fsdev_aio.so.1.0 00:03:22.615 LIB libspdk_sock_posix.a 00:03:22.876 SO libspdk_sock_posix.so.6.0 00:03:22.876 SYMLINK libspdk_fsdev_aio.so 00:03:22.876 SYMLINK libspdk_sock_posix.so 00:03:22.876 CC module/bdev/delay/vbdev_delay.o 00:03:22.876 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:22.876 CC module/bdev/gpt/gpt.o 00:03:22.876 CC module/bdev/error/vbdev_error.o 00:03:22.876 CC module/bdev/gpt/vbdev_gpt.o 00:03:22.876 CC module/bdev/error/vbdev_error_rpc.o 00:03:23.137 CC module/bdev/lvol/vbdev_lvol.o 00:03:23.137 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:23.137 CC module/bdev/split/vbdev_split.o 00:03:23.137 CC module/bdev/split/vbdev_split_rpc.o 00:03:23.137 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:23.137 CC module/blobfs/bdev/blobfs_bdev.o 00:03:23.137 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:23.137 CC module/bdev/nvme/bdev_nvme.o 00:03:23.137 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:23.137 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:23.137 CC module/bdev/nvme/nvme_rpc.o 00:03:23.137 CC module/bdev/passthru/vbdev_passthru.o 00:03:23.137 CC module/bdev/nvme/bdev_mdns_client.o 00:03:23.137 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:23.137 CC module/bdev/nvme/vbdev_opal.o 00:03:23.137 CC module/bdev/malloc/bdev_malloc.o 00:03:23.137 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:23.137 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:23.137 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:23.137 CC module/bdev/null/bdev_null.o 00:03:23.137 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:23.137 CC module/bdev/null/bdev_null_rpc.o 00:03:23.137 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:23.137 CC module/bdev/iscsi/bdev_iscsi.o 00:03:23.137 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:23.137 CC module/bdev/aio/bdev_aio.o 00:03:23.137 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:23.137 CC module/bdev/aio/bdev_aio_rpc.o 00:03:23.137 CC module/bdev/ftl/bdev_ftl.o 00:03:23.137 CC 
module/bdev/ftl/bdev_ftl_rpc.o 00:03:23.137 CC module/bdev/raid/bdev_raid.o 00:03:23.137 CC module/bdev/raid/bdev_raid_rpc.o 00:03:23.137 CC module/bdev/raid/bdev_raid_sb.o 00:03:23.137 CC module/bdev/raid/raid0.o 00:03:23.137 CC module/bdev/raid/raid1.o 00:03:23.137 CC module/bdev/raid/concat.o 00:03:23.399 LIB libspdk_blobfs_bdev.a 00:03:23.399 LIB libspdk_bdev_error.a 00:03:23.399 LIB libspdk_bdev_split.a 00:03:23.399 SO libspdk_blobfs_bdev.so.6.0 00:03:23.399 LIB libspdk_bdev_gpt.a 00:03:23.399 LIB libspdk_bdev_null.a 00:03:23.399 SO libspdk_bdev_split.so.6.0 00:03:23.399 SO libspdk_bdev_error.so.6.0 00:03:23.399 SO libspdk_bdev_gpt.so.6.0 00:03:23.399 SYMLINK libspdk_blobfs_bdev.so 00:03:23.399 LIB libspdk_bdev_zone_block.a 00:03:23.399 LIB libspdk_bdev_ftl.a 00:03:23.399 SO libspdk_bdev_null.so.6.0 00:03:23.399 LIB libspdk_bdev_passthru.a 00:03:23.399 SYMLINK libspdk_bdev_split.so 00:03:23.399 SYMLINK libspdk_bdev_gpt.so 00:03:23.399 SYMLINK libspdk_bdev_error.so 00:03:23.399 SO libspdk_bdev_ftl.so.6.0 00:03:23.399 SO libspdk_bdev_zone_block.so.6.0 00:03:23.399 LIB libspdk_bdev_delay.a 00:03:23.399 LIB libspdk_bdev_iscsi.a 00:03:23.399 LIB libspdk_bdev_aio.a 00:03:23.399 SO libspdk_bdev_passthru.so.6.0 00:03:23.399 LIB libspdk_bdev_malloc.a 00:03:23.399 SYMLINK libspdk_bdev_null.so 00:03:23.399 SO libspdk_bdev_delay.so.6.0 00:03:23.399 SO libspdk_bdev_iscsi.so.6.0 00:03:23.399 SO libspdk_bdev_aio.so.6.0 00:03:23.661 SYMLINK libspdk_bdev_zone_block.so 00:03:23.661 SO libspdk_bdev_malloc.so.6.0 00:03:23.661 SYMLINK libspdk_bdev_ftl.so 00:03:23.661 SYMLINK libspdk_bdev_passthru.so 00:03:23.661 LIB libspdk_bdev_lvol.a 00:03:23.661 SYMLINK libspdk_bdev_iscsi.so 00:03:23.661 SYMLINK libspdk_bdev_delay.so 00:03:23.661 SYMLINK libspdk_bdev_aio.so 00:03:23.661 SO libspdk_bdev_lvol.so.6.0 00:03:23.661 LIB libspdk_bdev_virtio.a 00:03:23.661 SYMLINK libspdk_bdev_malloc.so 00:03:23.661 SYMLINK libspdk_bdev_lvol.so 00:03:23.661 SO libspdk_bdev_virtio.so.6.0 00:03:23.661 SYMLINK libspdk_bdev_virtio.so 00:03:23.922 LIB libspdk_bdev_raid.a 00:03:24.184 SO libspdk_bdev_raid.so.6.0 00:03:24.184 SYMLINK libspdk_bdev_raid.so 00:03:25.569 LIB libspdk_bdev_nvme.a 00:03:25.569 SO libspdk_bdev_nvme.so.7.1 00:03:25.569 SYMLINK libspdk_bdev_nvme.so 00:03:26.511 CC module/event/subsystems/vmd/vmd.o 00:03:26.511 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:26.511 CC module/event/subsystems/iobuf/iobuf.o 00:03:26.511 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:26.511 CC module/event/subsystems/sock/sock.o 00:03:26.511 CC module/event/subsystems/scheduler/scheduler.o 00:03:26.511 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:26.511 CC module/event/subsystems/keyring/keyring.o 00:03:26.511 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:26.511 CC module/event/subsystems/fsdev/fsdev.o 00:03:26.511 LIB libspdk_event_vmd.a 00:03:26.511 LIB libspdk_event_keyring.a 00:03:26.511 LIB libspdk_event_fsdev.a 00:03:26.511 LIB libspdk_event_vhost_blk.a 00:03:26.511 LIB libspdk_event_sock.a 00:03:26.511 LIB libspdk_event_scheduler.a 00:03:26.511 LIB libspdk_event_vfu_tgt.a 00:03:26.511 LIB libspdk_event_iobuf.a 00:03:26.511 SO libspdk_event_keyring.so.1.0 00:03:26.511 SO libspdk_event_fsdev.so.1.0 00:03:26.511 SO libspdk_event_vmd.so.6.0 00:03:26.511 SO libspdk_event_vhost_blk.so.3.0 00:03:26.511 SO libspdk_event_vfu_tgt.so.3.0 00:03:26.511 SO libspdk_event_iobuf.so.3.0 00:03:26.511 SO libspdk_event_scheduler.so.4.0 00:03:26.511 SO libspdk_event_sock.so.5.0 00:03:26.511 SYMLINK libspdk_event_keyring.so 
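After the libraries and event subsystems link, the build turns to SPDK's example and test binaries. The TEST_HEADER lines appearing below are presumably per-header self-sufficiency checks — verifying that each public header under include/spdk compiles on its own — though the actual rule is not visible in this log. A hypothetical equivalent of one such check:

    # Hypothetical sketch only; the real TEST_HEADER rule is not shown in this
    # log. Compile a public header standalone to prove it is self-contained.
    echo '#include <spdk/accel.h>' | cc -Iinclude -x c -c -o /dev/null -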
00:03:26.511 SYMLINK libspdk_event_fsdev.so 00:03:26.511 SYMLINK libspdk_event_vhost_blk.so 00:03:26.511 SYMLINK libspdk_event_vfu_tgt.so 00:03:26.511 SYMLINK libspdk_event_scheduler.so 00:03:26.511 SYMLINK libspdk_event_sock.so 00:03:26.511 SYMLINK libspdk_event_iobuf.so 00:03:26.511 SYMLINK libspdk_event_vmd.so 00:03:27.082 CC module/event/subsystems/accel/accel.o 00:03:27.082 LIB libspdk_event_accel.a 00:03:27.082 SO libspdk_event_accel.so.6.0 00:03:27.342 SYMLINK libspdk_event_accel.so 00:03:27.603 CC module/event/subsystems/bdev/bdev.o 00:03:27.864 LIB libspdk_event_bdev.a 00:03:27.864 SO libspdk_event_bdev.so.6.0 00:03:27.864 SYMLINK libspdk_event_bdev.so 00:03:28.125 CC module/event/subsystems/scsi/scsi.o 00:03:28.385 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:28.385 CC module/event/subsystems/nbd/nbd.o 00:03:28.385 CC module/event/subsystems/ublk/ublk.o 00:03:28.385 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:28.385 LIB libspdk_event_ublk.a 00:03:28.385 LIB libspdk_event_nbd.a 00:03:28.385 LIB libspdk_event_scsi.a 00:03:28.385 SO libspdk_event_ublk.so.3.0 00:03:28.385 SO libspdk_event_scsi.so.6.0 00:03:28.385 SO libspdk_event_nbd.so.6.0 00:03:28.645 LIB libspdk_event_nvmf.a 00:03:28.645 SYMLINK libspdk_event_ublk.so 00:03:28.645 SYMLINK libspdk_event_scsi.so 00:03:28.645 SYMLINK libspdk_event_nbd.so 00:03:28.645 SO libspdk_event_nvmf.so.6.0 00:03:28.645 SYMLINK libspdk_event_nvmf.so 00:03:28.906 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:28.906 CC module/event/subsystems/iscsi/iscsi.o 00:03:29.167 LIB libspdk_event_vhost_scsi.a 00:03:29.167 SO libspdk_event_vhost_scsi.so.3.0 00:03:29.167 LIB libspdk_event_iscsi.a 00:03:29.167 SO libspdk_event_iscsi.so.6.0 00:03:29.167 SYMLINK libspdk_event_vhost_scsi.so 00:03:29.167 SYMLINK libspdk_event_iscsi.so 00:03:29.429 SO libspdk.so.6.0 00:03:29.429 SYMLINK libspdk.so 00:03:29.693 CXX app/trace/trace.o 00:03:29.955 TEST_HEADER include/spdk/accel.h 00:03:29.955 TEST_HEADER include/spdk/accel_module.h 00:03:29.955 TEST_HEADER include/spdk/barrier.h 00:03:29.955 TEST_HEADER include/spdk/bdev.h 00:03:29.955 TEST_HEADER include/spdk/assert.h 00:03:29.955 TEST_HEADER include/spdk/base64.h 00:03:29.955 CC test/rpc_client/rpc_client_test.o 00:03:29.955 CC app/trace_record/trace_record.o 00:03:29.955 TEST_HEADER include/spdk/bdev_module.h 00:03:29.955 TEST_HEADER include/spdk/bdev_zone.h 00:03:29.955 TEST_HEADER include/spdk/bit_pool.h 00:03:29.955 CC app/spdk_nvme_discover/discovery_aer.o 00:03:29.955 TEST_HEADER include/spdk/bit_array.h 00:03:29.955 TEST_HEADER include/spdk/blob_bdev.h 00:03:29.955 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:29.955 CC app/spdk_lspci/spdk_lspci.o 00:03:29.955 TEST_HEADER include/spdk/blobfs.h 00:03:29.955 CC app/spdk_top/spdk_top.o 00:03:29.955 TEST_HEADER include/spdk/blob.h 00:03:29.955 TEST_HEADER include/spdk/conf.h 00:03:29.955 CC app/spdk_nvme_perf/perf.o 00:03:29.955 TEST_HEADER include/spdk/config.h 00:03:29.955 TEST_HEADER include/spdk/cpuset.h 00:03:29.955 TEST_HEADER include/spdk/crc16.h 00:03:29.955 TEST_HEADER include/spdk/crc32.h 00:03:29.955 TEST_HEADER include/spdk/crc64.h 00:03:29.955 CC app/spdk_nvme_identify/identify.o 00:03:29.955 TEST_HEADER include/spdk/dif.h 00:03:29.955 TEST_HEADER include/spdk/dma.h 00:03:29.955 TEST_HEADER include/spdk/endian.h 00:03:29.955 TEST_HEADER include/spdk/env_dpdk.h 00:03:29.955 TEST_HEADER include/spdk/env.h 00:03:29.955 TEST_HEADER include/spdk/event.h 00:03:29.955 TEST_HEADER include/spdk/fd_group.h 00:03:29.955 TEST_HEADER 
include/spdk/fd.h 00:03:29.955 TEST_HEADER include/spdk/file.h 00:03:29.955 TEST_HEADER include/spdk/fsdev.h 00:03:29.955 TEST_HEADER include/spdk/fsdev_module.h 00:03:29.955 TEST_HEADER include/spdk/ftl.h 00:03:29.955 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:29.955 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:29.955 TEST_HEADER include/spdk/hexlify.h 00:03:29.955 TEST_HEADER include/spdk/gpt_spec.h 00:03:29.955 TEST_HEADER include/spdk/histogram_data.h 00:03:29.955 TEST_HEADER include/spdk/idxd.h 00:03:29.955 TEST_HEADER include/spdk/idxd_spec.h 00:03:29.955 TEST_HEADER include/spdk/init.h 00:03:29.955 TEST_HEADER include/spdk/ioat.h 00:03:29.955 CC app/iscsi_tgt/iscsi_tgt.o 00:03:29.955 TEST_HEADER include/spdk/ioat_spec.h 00:03:29.955 TEST_HEADER include/spdk/iscsi_spec.h 00:03:29.955 TEST_HEADER include/spdk/json.h 00:03:29.955 TEST_HEADER include/spdk/keyring.h 00:03:29.955 TEST_HEADER include/spdk/jsonrpc.h 00:03:29.955 TEST_HEADER include/spdk/keyring_module.h 00:03:29.955 TEST_HEADER include/spdk/log.h 00:03:29.955 TEST_HEADER include/spdk/likely.h 00:03:29.955 TEST_HEADER include/spdk/md5.h 00:03:29.955 TEST_HEADER include/spdk/lvol.h 00:03:29.955 CC app/nvmf_tgt/nvmf_main.o 00:03:29.955 TEST_HEADER include/spdk/memory.h 00:03:29.955 CC app/spdk_dd/spdk_dd.o 00:03:29.955 TEST_HEADER include/spdk/mmio.h 00:03:29.955 TEST_HEADER include/spdk/nbd.h 00:03:29.955 TEST_HEADER include/spdk/net.h 00:03:29.955 TEST_HEADER include/spdk/notify.h 00:03:29.955 TEST_HEADER include/spdk/nvme.h 00:03:29.955 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:29.955 TEST_HEADER include/spdk/nvme_intel.h 00:03:29.955 TEST_HEADER include/spdk/nvme_spec.h 00:03:29.955 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:29.955 TEST_HEADER include/spdk/nvme_zns.h 00:03:29.955 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:29.955 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:29.955 CC app/spdk_tgt/spdk_tgt.o 00:03:29.955 TEST_HEADER include/spdk/nvmf_spec.h 00:03:29.955 TEST_HEADER include/spdk/nvmf.h 00:03:29.955 TEST_HEADER include/spdk/opal.h 00:03:29.955 TEST_HEADER include/spdk/nvmf_transport.h 00:03:29.955 TEST_HEADER include/spdk/opal_spec.h 00:03:29.955 TEST_HEADER include/spdk/queue.h 00:03:29.955 TEST_HEADER include/spdk/pipe.h 00:03:29.955 TEST_HEADER include/spdk/pci_ids.h 00:03:29.955 TEST_HEADER include/spdk/reduce.h 00:03:29.955 TEST_HEADER include/spdk/rpc.h 00:03:29.955 TEST_HEADER include/spdk/scheduler.h 00:03:29.955 TEST_HEADER include/spdk/scsi.h 00:03:29.955 TEST_HEADER include/spdk/scsi_spec.h 00:03:29.955 TEST_HEADER include/spdk/sock.h 00:03:29.955 TEST_HEADER include/spdk/stdinc.h 00:03:29.955 TEST_HEADER include/spdk/string.h 00:03:29.955 TEST_HEADER include/spdk/thread.h 00:03:29.955 TEST_HEADER include/spdk/trace.h 00:03:29.955 TEST_HEADER include/spdk/trace_parser.h 00:03:29.955 TEST_HEADER include/spdk/ublk.h 00:03:29.955 TEST_HEADER include/spdk/tree.h 00:03:29.955 TEST_HEADER include/spdk/version.h 00:03:29.955 TEST_HEADER include/spdk/util.h 00:03:29.955 TEST_HEADER include/spdk/uuid.h 00:03:29.955 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:29.955 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:29.955 TEST_HEADER include/spdk/vhost.h 00:03:29.955 TEST_HEADER include/spdk/vmd.h 00:03:29.955 TEST_HEADER include/spdk/xor.h 00:03:29.955 TEST_HEADER include/spdk/zipf.h 00:03:29.955 CXX test/cpp_headers/accel.o 00:03:29.955 CXX test/cpp_headers/accel_module.o 00:03:29.955 CXX test/cpp_headers/assert.o 00:03:29.955 CXX test/cpp_headers/barrier.o 00:03:29.955 CXX 
test/cpp_headers/base64.o 00:03:29.955 CXX test/cpp_headers/bdev.o 00:03:29.955 CXX test/cpp_headers/bdev_module.o 00:03:29.955 CXX test/cpp_headers/bdev_zone.o 00:03:29.955 CXX test/cpp_headers/bit_array.o 00:03:29.955 CXX test/cpp_headers/bit_pool.o 00:03:29.955 CXX test/cpp_headers/blob_bdev.o 00:03:29.955 CXX test/cpp_headers/blobfs_bdev.o 00:03:29.955 CXX test/cpp_headers/blobfs.o 00:03:29.955 CXX test/cpp_headers/blob.o 00:03:29.955 CXX test/cpp_headers/config.o 00:03:29.955 CXX test/cpp_headers/conf.o 00:03:29.955 CXX test/cpp_headers/cpuset.o 00:03:29.955 CXX test/cpp_headers/crc16.o 00:03:29.955 CXX test/cpp_headers/crc32.o 00:03:29.955 CXX test/cpp_headers/crc64.o 00:03:29.955 CXX test/cpp_headers/dif.o 00:03:29.955 CXX test/cpp_headers/endian.o 00:03:29.955 CXX test/cpp_headers/dma.o 00:03:29.955 CXX test/cpp_headers/env_dpdk.o 00:03:29.955 CXX test/cpp_headers/fd_group.o 00:03:29.955 CXX test/cpp_headers/env.o 00:03:29.955 CXX test/cpp_headers/event.o 00:03:29.955 CXX test/cpp_headers/fd.o 00:03:29.955 CXX test/cpp_headers/file.o 00:03:29.955 CXX test/cpp_headers/fsdev.o 00:03:29.955 CXX test/cpp_headers/ftl.o 00:03:29.955 CXX test/cpp_headers/fsdev_module.o 00:03:29.955 CXX test/cpp_headers/fuse_dispatcher.o 00:03:29.955 CXX test/cpp_headers/gpt_spec.o 00:03:29.955 CXX test/cpp_headers/hexlify.o 00:03:29.955 CXX test/cpp_headers/idxd.o 00:03:29.955 CXX test/cpp_headers/histogram_data.o 00:03:29.955 CXX test/cpp_headers/idxd_spec.o 00:03:29.955 CXX test/cpp_headers/init.o 00:03:29.956 CXX test/cpp_headers/ioat.o 00:03:29.956 CXX test/cpp_headers/iscsi_spec.o 00:03:29.956 CXX test/cpp_headers/ioat_spec.o 00:03:29.956 CXX test/cpp_headers/json.o 00:03:29.956 CXX test/cpp_headers/keyring.o 00:03:29.956 CXX test/cpp_headers/keyring_module.o 00:03:29.956 CXX test/cpp_headers/jsonrpc.o 00:03:29.956 CXX test/cpp_headers/likely.o 00:03:29.956 CXX test/cpp_headers/log.o 00:03:29.956 CXX test/cpp_headers/lvol.o 00:03:29.956 CXX test/cpp_headers/memory.o 00:03:29.956 CXX test/cpp_headers/md5.o 00:03:29.956 CXX test/cpp_headers/nbd.o 00:03:29.956 CXX test/cpp_headers/mmio.o 00:03:29.956 CC test/app/histogram_perf/histogram_perf.o 00:03:29.956 CXX test/cpp_headers/nvme.o 00:03:29.956 CXX test/cpp_headers/net.o 00:03:29.956 CXX test/cpp_headers/notify.o 00:03:29.956 CXX test/cpp_headers/nvme_ocssd.o 00:03:29.956 CC test/app/jsoncat/jsoncat.o 00:03:29.956 CXX test/cpp_headers/nvme_intel.o 00:03:29.956 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:29.956 CXX test/cpp_headers/nvme_zns.o 00:03:29.956 CXX test/cpp_headers/nvme_spec.o 00:03:29.956 CC test/app/stub/stub.o 00:03:29.956 CC examples/util/zipf/zipf.o 00:03:29.956 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:29.956 CXX test/cpp_headers/nvmf_cmd.o 00:03:29.956 CXX test/cpp_headers/nvmf.o 00:03:29.956 CXX test/cpp_headers/nvmf_spec.o 00:03:30.222 CXX test/cpp_headers/nvmf_transport.o 00:03:30.222 CXX test/cpp_headers/opal.o 00:03:30.222 CXX test/cpp_headers/opal_spec.o 00:03:30.222 CXX test/cpp_headers/pipe.o 00:03:30.222 CXX test/cpp_headers/pci_ids.o 00:03:30.222 CXX test/cpp_headers/queue.o 00:03:30.222 CXX test/cpp_headers/rpc.o 00:03:30.222 CXX test/cpp_headers/reduce.o 00:03:30.222 CC examples/ioat/verify/verify.o 00:03:30.222 CC examples/ioat/perf/perf.o 00:03:30.222 CXX test/cpp_headers/scheduler.o 00:03:30.222 CXX test/cpp_headers/scsi.o 00:03:30.222 CXX test/cpp_headers/sock.o 00:03:30.222 CC test/thread/poller_perf/poller_perf.o 00:03:30.222 CXX test/cpp_headers/scsi_spec.o 00:03:30.222 LINK spdk_lspci 00:03:30.222 CXX 
test/cpp_headers/string.o 00:03:30.222 CXX test/cpp_headers/stdinc.o 00:03:30.222 CXX test/cpp_headers/trace.o 00:03:30.222 CXX test/cpp_headers/thread.o 00:03:30.222 CXX test/cpp_headers/tree.o 00:03:30.222 CXX test/cpp_headers/util.o 00:03:30.222 CXX test/cpp_headers/ublk.o 00:03:30.222 CXX test/cpp_headers/trace_parser.o 00:03:30.222 CC test/env/memory/memory_ut.o 00:03:30.222 CXX test/cpp_headers/uuid.o 00:03:30.222 CXX test/cpp_headers/vfio_user_pci.o 00:03:30.222 CXX test/cpp_headers/version.o 00:03:30.222 CXX test/cpp_headers/xor.o 00:03:30.222 CXX test/cpp_headers/vhost.o 00:03:30.223 CXX test/cpp_headers/vfio_user_spec.o 00:03:30.223 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:30.223 CXX test/cpp_headers/vmd.o 00:03:30.223 CXX test/cpp_headers/zipf.o 00:03:30.223 CC test/env/vtophys/vtophys.o 00:03:30.223 CC test/env/pci/pci_ut.o 00:03:30.223 CC test/app/bdev_svc/bdev_svc.o 00:03:30.223 CC test/dma/test_dma/test_dma.o 00:03:30.223 CC app/fio/nvme/fio_plugin.o 00:03:30.223 CC app/fio/bdev/fio_plugin.o 00:03:30.223 LINK rpc_client_test 00:03:30.223 LINK spdk_nvme_discover 00:03:30.497 LINK interrupt_tgt 00:03:30.497 LINK iscsi_tgt 00:03:30.497 LINK spdk_trace_record 00:03:30.497 LINK nvmf_tgt 00:03:30.758 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:30.758 LINK spdk_tgt 00:03:30.758 CC test/env/mem_callbacks/mem_callbacks.o 00:03:30.758 LINK zipf 00:03:30.758 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:30.758 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:30.758 LINK jsoncat 00:03:30.758 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:31.020 LINK verify 00:03:31.020 LINK poller_perf 00:03:31.020 LINK vtophys 00:03:31.020 LINK stub 00:03:31.281 LINK spdk_trace 00:03:31.281 LINK env_dpdk_post_init 00:03:31.281 LINK ioat_perf 00:03:31.281 LINK spdk_dd 00:03:31.281 LINK bdev_svc 00:03:31.281 LINK histogram_perf 00:03:31.281 LINK test_dma 00:03:31.281 LINK spdk_nvme_perf 00:03:31.543 CC examples/sock/hello_world/hello_sock.o 00:03:31.543 CC examples/idxd/perf/perf.o 00:03:31.543 CC examples/vmd/lsvmd/lsvmd.o 00:03:31.543 LINK spdk_top 00:03:31.543 CC examples/vmd/led/led.o 00:03:31.543 CC examples/thread/thread/thread_ex.o 00:03:31.543 LINK nvme_fuzz 00:03:31.543 LINK vhost_fuzz 00:03:31.543 LINK pci_ut 00:03:31.543 LINK spdk_nvme 00:03:31.543 LINK spdk_bdev 00:03:31.543 CC app/vhost/vhost.o 00:03:31.543 LINK lsvmd 00:03:31.805 LINK led 00:03:31.805 LINK mem_callbacks 00:03:31.805 LINK spdk_nvme_identify 00:03:31.805 CC test/event/reactor/reactor.o 00:03:31.805 LINK hello_sock 00:03:31.805 CC test/event/event_perf/event_perf.o 00:03:31.805 CC test/event/reactor_perf/reactor_perf.o 00:03:31.805 CC test/event/app_repeat/app_repeat.o 00:03:31.805 CC test/event/scheduler/scheduler.o 00:03:31.805 LINK thread 00:03:31.805 LINK idxd_perf 00:03:31.805 LINK vhost 00:03:32.066 LINK reactor 00:03:32.066 CC test/nvme/sgl/sgl.o 00:03:32.066 LINK event_perf 00:03:32.066 LINK reactor_perf 00:03:32.066 CC test/nvme/err_injection/err_injection.o 00:03:32.066 CC test/nvme/simple_copy/simple_copy.o 00:03:32.066 CC test/nvme/reserve/reserve.o 00:03:32.066 CC test/nvme/aer/aer.o 00:03:32.066 CC test/nvme/e2edp/nvme_dp.o 00:03:32.066 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:32.066 CC test/nvme/reset/reset.o 00:03:32.066 CC test/nvme/startup/startup.o 00:03:32.066 CC test/nvme/boot_partition/boot_partition.o 00:03:32.066 CC test/nvme/cuse/cuse.o 00:03:32.066 CC test/nvme/connect_stress/connect_stress.o 00:03:32.066 CC test/nvme/compliance/nvme_compliance.o 00:03:32.066 CC 
test/nvme/fused_ordering/fused_ordering.o 00:03:32.066 CC test/nvme/overhead/overhead.o 00:03:32.066 LINK app_repeat 00:03:32.066 CC test/nvme/fdp/fdp.o 00:03:32.066 CC test/blobfs/mkfs/mkfs.o 00:03:32.066 CC test/accel/dif/dif.o 00:03:32.066 LINK scheduler 00:03:32.066 CC test/lvol/esnap/esnap.o 00:03:32.329 LINK err_injection 00:03:32.329 LINK boot_partition 00:03:32.329 LINK connect_stress 00:03:32.329 LINK doorbell_aers 00:03:32.329 LINK startup 00:03:32.329 LINK simple_copy 00:03:32.329 LINK memory_ut 00:03:32.329 LINK fused_ordering 00:03:32.329 LINK reserve 00:03:32.329 LINK sgl 00:03:32.329 CC examples/nvme/reconnect/reconnect.o 00:03:32.329 LINK mkfs 00:03:32.329 LINK nvme_dp 00:03:32.329 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:32.329 CC examples/nvme/hello_world/hello_world.o 00:03:32.329 CC examples/nvme/arbitration/arbitration.o 00:03:32.329 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:32.329 CC examples/nvme/hotplug/hotplug.o 00:03:32.329 LINK reset 00:03:32.329 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:32.329 CC examples/nvme/abort/abort.o 00:03:32.329 LINK aer 00:03:32.329 LINK overhead 00:03:32.329 LINK nvme_compliance 00:03:32.329 LINK fdp 00:03:32.590 CC examples/accel/perf/accel_perf.o 00:03:32.590 CC examples/blob/hello_world/hello_blob.o 00:03:32.590 CC examples/blob/cli/blobcli.o 00:03:32.590 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:32.590 LINK cmb_copy 00:03:32.590 LINK pmr_persistence 00:03:32.590 LINK hotplug 00:03:32.590 LINK hello_world 00:03:32.590 LINK iscsi_fuzz 00:03:32.590 LINK arbitration 00:03:32.590 LINK reconnect 00:03:32.590 LINK abort 00:03:32.590 LINK dif 00:03:32.851 LINK hello_blob 00:03:32.851 LINK nvme_manage 00:03:32.851 LINK hello_fsdev 00:03:32.851 LINK accel_perf 00:03:33.113 LINK blobcli 00:03:33.375 LINK cuse 00:03:33.375 CC test/bdev/bdevio/bdevio.o 00:03:33.636 CC examples/bdev/hello_world/hello_bdev.o 00:03:33.636 CC examples/bdev/bdevperf/bdevperf.o 00:03:33.636 LINK bdevio 00:03:33.898 LINK hello_bdev 00:03:34.159 LINK bdevperf 00:03:35.102 CC examples/nvmf/nvmf/nvmf.o 00:03:35.102 LINK nvmf 00:03:37.018 LINK esnap 00:03:37.018 00:03:37.018 real 0m56.034s 00:03:37.018 user 8m9.694s 00:03:37.018 sys 5m39.285s 00:03:37.018 15:13:25 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:37.018 15:13:25 make -- common/autotest_common.sh@10 -- $ set +x 00:03:37.018 ************************************ 00:03:37.018 END TEST make 00:03:37.018 ************************************ 00:03:37.018 15:13:25 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:37.018 15:13:25 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:37.018 15:13:25 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:37.018 15:13:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:37.018 15:13:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:37.018 15:13:25 -- pm/common@44 -- $ pid=273810 00:03:37.018 15:13:25 -- pm/common@50 -- $ kill -TERM 273810 00:03:37.018 15:13:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:37.018 15:13:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:37.018 15:13:25 -- pm/common@44 -- $ pid=273811 00:03:37.018 15:13:25 -- pm/common@50 -- $ kill -TERM 273811 00:03:37.018 15:13:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:37.018 15:13:25 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:37.018 15:13:25 -- pm/common@44 -- $ pid=273813 00:03:37.018 15:13:25 -- pm/common@50 -- $ kill -TERM 273813 00:03:37.018 15:13:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:37.018 15:13:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:37.018 15:13:25 -- pm/common@44 -- $ pid=273837 00:03:37.018 15:13:25 -- pm/common@50 -- $ sudo -E kill -TERM 273837 00:03:37.018 15:13:25 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:37.018 15:13:25 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:37.282 15:13:25 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:37.282 15:13:25 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:37.282 15:13:25 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:37.282 15:13:26 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:37.282 15:13:26 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:37.282 15:13:26 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:37.282 15:13:26 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:37.282 15:13:26 -- scripts/common.sh@336 -- # IFS=.-: 00:03:37.282 15:13:26 -- scripts/common.sh@336 -- # read -ra ver1 00:03:37.283 15:13:26 -- scripts/common.sh@337 -- # IFS=.-: 00:03:37.283 15:13:26 -- scripts/common.sh@337 -- # read -ra ver2 00:03:37.283 15:13:26 -- scripts/common.sh@338 -- # local 'op=<' 00:03:37.283 15:13:26 -- scripts/common.sh@340 -- # ver1_l=2 00:03:37.283 15:13:26 -- scripts/common.sh@341 -- # ver2_l=1 00:03:37.283 15:13:26 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:37.283 15:13:26 -- scripts/common.sh@344 -- # case "$op" in 00:03:37.283 15:13:26 -- scripts/common.sh@345 -- # : 1 00:03:37.283 15:13:26 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:37.283 15:13:26 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:37.283 15:13:26 -- scripts/common.sh@365 -- # decimal 1 00:03:37.283 15:13:26 -- scripts/common.sh@353 -- # local d=1 00:03:37.283 15:13:26 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:37.283 15:13:26 -- scripts/common.sh@355 -- # echo 1 00:03:37.283 15:13:26 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:37.283 15:13:26 -- scripts/common.sh@366 -- # decimal 2 00:03:37.283 15:13:26 -- scripts/common.sh@353 -- # local d=2 00:03:37.283 15:13:26 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:37.283 15:13:26 -- scripts/common.sh@355 -- # echo 2 00:03:37.283 15:13:26 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:37.283 15:13:26 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:37.283 15:13:26 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:37.283 15:13:26 -- scripts/common.sh@368 -- # return 0 00:03:37.283 15:13:26 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:37.283 15:13:26 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:37.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.283 --rc genhtml_branch_coverage=1 00:03:37.283 --rc genhtml_function_coverage=1 00:03:37.283 --rc genhtml_legend=1 00:03:37.283 --rc geninfo_all_blocks=1 00:03:37.283 --rc geninfo_unexecuted_blocks=1 00:03:37.283 00:03:37.283 ' 00:03:37.283 15:13:26 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:37.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.283 --rc genhtml_branch_coverage=1 00:03:37.283 --rc genhtml_function_coverage=1 00:03:37.283 --rc genhtml_legend=1 00:03:37.283 --rc geninfo_all_blocks=1 00:03:37.283 --rc geninfo_unexecuted_blocks=1 00:03:37.283 00:03:37.283 ' 00:03:37.283 15:13:26 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:37.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.283 --rc genhtml_branch_coverage=1 00:03:37.283 --rc genhtml_function_coverage=1 00:03:37.283 --rc genhtml_legend=1 00:03:37.283 --rc geninfo_all_blocks=1 00:03:37.283 --rc geninfo_unexecuted_blocks=1 00:03:37.283 00:03:37.283 ' 00:03:37.283 15:13:26 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:37.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.283 --rc genhtml_branch_coverage=1 00:03:37.283 --rc genhtml_function_coverage=1 00:03:37.283 --rc genhtml_legend=1 00:03:37.283 --rc geninfo_all_blocks=1 00:03:37.283 --rc geninfo_unexecuted_blocks=1 00:03:37.283 00:03:37.283 ' 00:03:37.283 15:13:26 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:37.283 15:13:26 -- nvmf/common.sh@7 -- # uname -s 00:03:37.283 15:13:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:37.283 15:13:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:37.283 15:13:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:37.283 15:13:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:37.283 15:13:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:37.283 15:13:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:37.283 15:13:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:37.283 15:13:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:37.283 15:13:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:37.283 15:13:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:37.283 15:13:26 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:37.283 15:13:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:37.283 15:13:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:37.283 15:13:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:37.283 15:13:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:37.283 15:13:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:37.283 15:13:26 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:37.283 15:13:26 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:37.283 15:13:26 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:37.283 15:13:26 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:37.283 15:13:26 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:37.283 15:13:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.283 15:13:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.283 15:13:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.283 15:13:26 -- paths/export.sh@5 -- # export PATH 00:03:37.283 15:13:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.283 15:13:26 -- nvmf/common.sh@51 -- # : 0 00:03:37.283 15:13:26 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:37.283 15:13:26 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:37.283 15:13:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:37.283 15:13:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:37.283 15:13:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:37.283 15:13:26 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:37.283 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:37.283 15:13:26 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:37.283 15:13:26 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:37.283 15:13:26 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:37.283 15:13:26 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:37.283 15:13:26 -- spdk/autotest.sh@32 -- # uname -s 00:03:37.283 15:13:26 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:37.283 15:13:26 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:37.283 15:13:26 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
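The lt/cmp_versions xtrace above (scripts/common.sh@333-368) is deciding whether the installed lcov (1.15) predates version 2, which selects the coverage flags used for the rest of the run. Below is a minimal bash sketch of that comparison, reconstructed from the traced commands alone: the helper names decimal, cmp_versions, and lt appear in the trace, but the exact bodies here are an approximation, not the verbatim SPDK source.

decimal() {
    # Pass numeric components through unchanged; anything else compares as 0.
    local d=$1
    [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
}

cmp_versions() {
    local ver1 ver1_l ver2 ver2_l
    IFS=.-: read -ra ver1 <<< "$1"   # split "1.15" on . - : into (1 15)
    local op=$2
    IFS=.-: read -ra ver2 <<< "$3"   # split "2" into (2)
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}

    local lt=0 gt=0 eq=0 v
    # Walk the longer component list; absent fields compare as 0.
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        ver1[v]=$(decimal "${ver1[v]:-0}")
        ver2[v]=$(decimal "${ver2[v]:-0}")
        ((ver1[v] > ver2[v])) && gt=1 && break
        ((ver1[v] < ver2[v])) && lt=1 && break
    done
    ((v == (ver1_l > ver2_l ? ver1_l : ver2_l))) && eq=1

    case "$op" in
        "<") ((lt == 1)) ;;
        ">") ((gt == 1)) ;;
        "==") ((eq == 1)) ;;
    esac
}

lt() { cmp_versions "$1" "<" "$2"; }

# As traced: lcov 1.15 sorts before 2, so the lcov 1.x-style --rc options are selected.
lt 1.15 2 && echo "lcov < 2: use the --rc lcov_branch_coverage=1 option set"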
00:03:37.283 15:13:26 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:37.283 15:13:26 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:37.283 15:13:26 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:37.283 15:13:26 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:37.283 15:13:26 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:37.283 15:13:26 -- spdk/autotest.sh@48 -- # udevadm_pid=339907 00:03:37.283 15:13:26 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:37.283 15:13:26 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:37.283 15:13:26 -- pm/common@17 -- # local monitor 00:03:37.283 15:13:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:37.283 15:13:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:37.283 15:13:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:37.283 15:13:26 -- pm/common@21 -- # date +%s 00:03:37.283 15:13:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:37.283 15:13:26 -- pm/common@21 -- # date +%s 00:03:37.283 15:13:26 -- pm/common@25 -- # sleep 1 00:03:37.283 15:13:26 -- pm/common@21 -- # date +%s 00:03:37.283 15:13:26 -- pm/common@21 -- # date +%s 00:03:37.283 15:13:26 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732112006 00:03:37.283 15:13:26 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732112006 00:03:37.283 15:13:26 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732112006 00:03:37.283 15:13:26 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732112006 00:03:37.283 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732112006_collect-cpu-load.pm.log 00:03:37.283 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732112006_collect-vmstat.pm.log 00:03:37.283 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732112006_collect-cpu-temp.pm.log 00:03:37.283 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732112006_collect-bmc-pm.bmc.pm.log 00:03:38.230 15:13:27 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:38.230 15:13:27 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:38.230 15:13:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:38.230 15:13:27 -- common/autotest_common.sh@10 -- # set +x 00:03:38.230 15:13:27 -- spdk/autotest.sh@59 -- # create_test_list 00:03:38.230 15:13:27 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:38.230 15:13:27 -- common/autotest_common.sh@10 -- # set +x 00:03:38.491 15:13:27 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:38.491 15:13:27 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:38.491 15:13:27 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:38.491 15:13:27 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:38.491 15:13:27 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:38.491 15:13:27 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:38.491 15:13:27 -- common/autotest_common.sh@1457 -- # uname 00:03:38.491 15:13:27 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:38.491 15:13:27 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:38.491 15:13:27 -- common/autotest_common.sh@1477 -- # uname 00:03:38.491 15:13:27 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:38.491 15:13:27 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:38.491 15:13:27 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:38.491 lcov: LCOV version 1.15 00:03:38.492 15:13:27 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:53.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:53.403 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:11.526 15:13:57 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:11.526 15:13:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:11.526 15:13:57 -- common/autotest_common.sh@10 -- # set +x 00:04:11.526 15:13:57 -- spdk/autotest.sh@78 -- # rm -f 00:04:11.526 15:13:57 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:12.470 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:04:12.470 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:04:12.470 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:04:12.470 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:04:12.470 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:04:12.470 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:04:12.470 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:04:12.470 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:04:12.730 0000:65:00.0 (144d a80a): Already using the nvme driver 00:04:12.730 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:04:12.730 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:04:12.730 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:04:12.730 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:04:12.730 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:04:12.730 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:04:12.730 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:04:12.730 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:04:12.991 15:14:01 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:04:12.991 15:14:01 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:12.991 15:14:01 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:12.991 15:14:01 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:12.991 15:14:01 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:12.991 15:14:01 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:12.991 15:14:01 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:12.991 15:14:01 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:12.991 15:14:01 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:12.991 15:14:01 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:12.991 15:14:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.991 15:14:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:12.991 15:14:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:12.991 15:14:01 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:12.991 15:14:01 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:13.252 No valid GPT data, bailing 00:04:13.252 15:14:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:13.252 15:14:01 -- scripts/common.sh@394 -- # pt= 00:04:13.252 15:14:01 -- scripts/common.sh@395 -- # return 1 00:04:13.252 15:14:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:13.252 1+0 records in 00:04:13.252 1+0 records out 00:04:13.252 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00194811 s, 538 MB/s 00:04:13.252 15:14:01 -- spdk/autotest.sh@105 -- # sync 00:04:13.252 15:14:01 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:13.252 15:14:01 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:13.252 15:14:01 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:23.263 15:14:10 -- spdk/autotest.sh@111 -- # uname -s 00:04:23.263 15:14:10 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:23.263 15:14:10 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:23.263 15:14:10 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:25.179 Hugepages 00:04:25.179 node hugesize free / total 00:04:25.179 node0 1048576kB 0 / 0 00:04:25.179 node0 2048kB 0 / 0 00:04:25.179 node1 1048576kB 0 / 0 00:04:25.179 node1 2048kB 0 / 0 00:04:25.179 00:04:25.179 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:25.179 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:25.179 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:25.179 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:25.179 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:25.179 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:25.179 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:25.179 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:25.179 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:25.441 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:25.441 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:25.441 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:25.441 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:25.441 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:25.441 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:25.441 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:25.441 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:25.441 I/OAT 0000:80:01.7 8086 
0b00 1 ioatdma - - 00:04:25.441 15:14:14 -- spdk/autotest.sh@117 -- # uname -s 00:04:25.441 15:14:14 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:25.441 15:14:14 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:25.441 15:14:14 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:28.832 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:28.832 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:28.832 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:28.832 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:29.093 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:29.093 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:29.093 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:29.093 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:29.093 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:29.093 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:29.093 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:29.093 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:29.093 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:29.093 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:29.093 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:29.093 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:31.007 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:31.268 15:14:20 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:32.212 15:14:21 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:32.212 15:14:21 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:32.212 15:14:21 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:32.212 15:14:21 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:32.212 15:14:21 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:32.212 15:14:21 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:32.212 15:14:21 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:32.212 15:14:21 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:32.212 15:14:21 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:32.212 15:14:21 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:32.212 15:14:21 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:04:32.212 15:14:21 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:36.421 Waiting for block devices as requested 00:04:36.421 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:36.421 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:36.421 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:36.421 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:36.421 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:36.421 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:36.421 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:36.421 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:36.421 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:04:36.682 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:36.682 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:36.682 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:36.943 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:36.943 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:36.943 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:37.204 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:37.204 0000:00:01.1 (8086 0b00): 
vfio-pci -> ioatdma 00:04:37.465 15:14:26 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:37.465 15:14:26 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:37.465 15:14:26 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:37.465 15:14:26 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:04:37.465 15:14:26 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:37.465 15:14:26 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:37.465 15:14:26 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:37.465 15:14:26 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:37.465 15:14:26 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:37.465 15:14:26 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:37.465 15:14:26 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:37.465 15:14:26 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:37.465 15:14:26 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:37.465 15:14:26 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:04:37.465 15:14:26 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:37.465 15:14:26 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:37.465 15:14:26 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:37.465 15:14:26 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:37.465 15:14:26 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:37.465 15:14:26 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:37.465 15:14:26 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:37.465 15:14:26 -- common/autotest_common.sh@1543 -- # continue 00:04:37.465 15:14:26 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:37.465 15:14:26 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:37.465 15:14:26 -- common/autotest_common.sh@10 -- # set +x 00:04:37.465 15:14:26 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:37.465 15:14:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:37.465 15:14:26 -- common/autotest_common.sh@10 -- # set +x 00:04:37.465 15:14:26 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:41.674 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:41.674 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:41.674 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:41.674 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:41.674 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:41.674 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:41.674 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:41.674 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:41.674 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:41.674 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:41.674 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:41.674 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:41.674 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:41.674 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:41.674 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:41.674 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:41.674 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:41.674 15:14:30 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:04:41.674 15:14:30 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:41.674 15:14:30 -- common/autotest_common.sh@10 -- # set +x 00:04:41.674 15:14:30 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:41.674 15:14:30 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:41.674 15:14:30 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:41.674 15:14:30 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:41.674 15:14:30 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:41.674 15:14:30 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:41.674 15:14:30 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:41.674 15:14:30 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:41.674 15:14:30 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:41.674 15:14:30 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:41.674 15:14:30 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:41.674 15:14:30 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:41.674 15:14:30 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:41.674 15:14:30 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:41.674 15:14:30 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:04:41.674 15:14:30 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:41.674 15:14:30 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:41.674 15:14:30 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:04:41.674 15:14:30 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:41.674 15:14:30 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:41.674 15:14:30 -- common/autotest_common.sh@1572 -- # return 0 00:04:41.674 15:14:30 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:41.674 15:14:30 -- common/autotest_common.sh@1580 -- # return 0 00:04:41.674 15:14:30 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:41.674 15:14:30 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:41.674 15:14:30 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:41.674 15:14:30 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:41.674 15:14:30 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:41.674 15:14:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:41.674 15:14:30 -- common/autotest_common.sh@10 -- # set +x 00:04:41.674 15:14:30 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:41.674 15:14:30 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:41.674 15:14:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.674 15:14:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.674 15:14:30 -- common/autotest_common.sh@10 -- # set +x 00:04:41.935 ************************************ 00:04:41.935 START TEST env 00:04:41.935 ************************************ 00:04:41.935 15:14:30 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:41.935 * Looking for test storage... 
00:04:41.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:41.935 15:14:30 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:41.935 15:14:30 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:41.935 15:14:30 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:41.935 15:14:30 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:41.935 15:14:30 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.935 15:14:30 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.935 15:14:30 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.935 15:14:30 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.935 15:14:30 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.935 15:14:30 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.935 15:14:30 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.936 15:14:30 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.936 15:14:30 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.936 15:14:30 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.936 15:14:30 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.936 15:14:30 env -- scripts/common.sh@344 -- # case "$op" in 00:04:41.936 15:14:30 env -- scripts/common.sh@345 -- # : 1 00:04:41.936 15:14:30 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.936 15:14:30 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:41.936 15:14:30 env -- scripts/common.sh@365 -- # decimal 1 00:04:41.936 15:14:30 env -- scripts/common.sh@353 -- # local d=1 00:04:41.936 15:14:30 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.936 15:14:30 env -- scripts/common.sh@355 -- # echo 1 00:04:41.936 15:14:30 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.936 15:14:30 env -- scripts/common.sh@366 -- # decimal 2 00:04:41.936 15:14:30 env -- scripts/common.sh@353 -- # local d=2 00:04:41.936 15:14:30 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.936 15:14:30 env -- scripts/common.sh@355 -- # echo 2 00:04:41.936 15:14:30 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.936 15:14:30 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.936 15:14:30 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.936 15:14:30 env -- scripts/common.sh@368 -- # return 0 00:04:41.936 15:14:30 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.936 15:14:30 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:41.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.936 --rc genhtml_branch_coverage=1 00:04:41.936 --rc genhtml_function_coverage=1 00:04:41.936 --rc genhtml_legend=1 00:04:41.936 --rc geninfo_all_blocks=1 00:04:41.936 --rc geninfo_unexecuted_blocks=1 00:04:41.936 00:04:41.936 ' 00:04:41.936 15:14:30 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:41.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.936 --rc genhtml_branch_coverage=1 00:04:41.936 --rc genhtml_function_coverage=1 00:04:41.936 --rc genhtml_legend=1 00:04:41.936 --rc geninfo_all_blocks=1 00:04:41.936 --rc geninfo_unexecuted_blocks=1 00:04:41.936 00:04:41.936 ' 00:04:41.936 15:14:30 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:41.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.936 --rc genhtml_branch_coverage=1 00:04:41.936 --rc genhtml_function_coverage=1 
00:04:41.936 --rc genhtml_legend=1 00:04:41.936 --rc geninfo_all_blocks=1 00:04:41.936 --rc geninfo_unexecuted_blocks=1 00:04:41.936 00:04:41.936 ' 00:04:41.936 15:14:30 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:41.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.936 --rc genhtml_branch_coverage=1 00:04:41.936 --rc genhtml_function_coverage=1 00:04:41.936 --rc genhtml_legend=1 00:04:41.936 --rc geninfo_all_blocks=1 00:04:41.936 --rc geninfo_unexecuted_blocks=1 00:04:41.936 00:04:41.936 ' 00:04:41.936 15:14:30 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:41.936 15:14:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.936 15:14:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.936 15:14:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.936 ************************************ 00:04:41.936 START TEST env_memory 00:04:41.936 ************************************ 00:04:41.936 15:14:30 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:41.936 00:04:41.936 00:04:41.936 CUnit - A unit testing framework for C - Version 2.1-3 00:04:41.936 http://cunit.sourceforge.net/ 00:04:41.936 00:04:41.936 00:04:41.936 Suite: memory 00:04:42.198 Test: alloc and free memory map ...[2024-11-20 15:14:30.933812] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:42.198 passed 00:04:42.198 Test: mem map translation ...[2024-11-20 15:14:30.959525] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:42.198 [2024-11-20 15:14:30.959569] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:42.198 [2024-11-20 15:14:30.959615] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:42.198 [2024-11-20 15:14:30.959622] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:42.198 passed 00:04:42.198 Test: mem map registration ...[2024-11-20 15:14:31.014926] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:42.198 [2024-11-20 15:14:31.014949] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:42.198 passed 00:04:42.198 Test: mem map adjacent registrations ...passed 00:04:42.198 00:04:42.198 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.198 suites 1 1 n/a 0 0 00:04:42.198 tests 4 4 4 0 0 00:04:42.198 asserts 152 152 152 0 n/a 00:04:42.198 00:04:42.198 Elapsed time = 0.197 seconds 00:04:42.198 00:04:42.198 real 0m0.212s 00:04:42.198 user 0m0.201s 00:04:42.198 sys 0m0.010s 00:04:42.198 15:14:31 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.198 15:14:31 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
00:04:42.198 ************************************ 00:04:42.198 END TEST env_memory 00:04:42.198 ************************************ 00:04:42.198 15:14:31 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:42.198 15:14:31 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.198 15:14:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.198 15:14:31 env -- common/autotest_common.sh@10 -- # set +x 00:04:42.460 ************************************ 00:04:42.460 START TEST env_vtophys 00:04:42.460 ************************************ 00:04:42.460 15:14:31 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:42.460 EAL: lib.eal log level changed from notice to debug 00:04:42.460 EAL: Detected lcore 0 as core 0 on socket 0 00:04:42.460 EAL: Detected lcore 1 as core 1 on socket 0 00:04:42.460 EAL: Detected lcore 2 as core 2 on socket 0 00:04:42.460 EAL: Detected lcore 3 as core 3 on socket 0 00:04:42.460 EAL: Detected lcore 4 as core 4 on socket 0 00:04:42.460 EAL: Detected lcore 5 as core 5 on socket 0 00:04:42.460 EAL: Detected lcore 6 as core 6 on socket 0 00:04:42.460 EAL: Detected lcore 7 as core 7 on socket 0 00:04:42.460 EAL: Detected lcore 8 as core 8 on socket 0 00:04:42.460 EAL: Detected lcore 9 as core 9 on socket 0 00:04:42.460 EAL: Detected lcore 10 as core 10 on socket 0 00:04:42.460 EAL: Detected lcore 11 as core 11 on socket 0 00:04:42.460 EAL: Detected lcore 12 as core 12 on socket 0 00:04:42.460 EAL: Detected lcore 13 as core 13 on socket 0 00:04:42.460 EAL: Detected lcore 14 as core 14 on socket 0 00:04:42.460 EAL: Detected lcore 15 as core 15 on socket 0 00:04:42.460 EAL: Detected lcore 16 as core 16 on socket 0 00:04:42.460 EAL: Detected lcore 17 as core 17 on socket 0 00:04:42.460 EAL: Detected lcore 18 as core 18 on socket 0 00:04:42.460 EAL: Detected lcore 19 as core 19 on socket 0 00:04:42.460 EAL: Detected lcore 20 as core 20 on socket 0 00:04:42.460 EAL: Detected lcore 21 as core 21 on socket 0 00:04:42.460 EAL: Detected lcore 22 as core 22 on socket 0 00:04:42.460 EAL: Detected lcore 23 as core 23 on socket 0 00:04:42.460 EAL: Detected lcore 24 as core 24 on socket 0 00:04:42.460 EAL: Detected lcore 25 as core 25 on socket 0 00:04:42.460 EAL: Detected lcore 26 as core 26 on socket 0 00:04:42.460 EAL: Detected lcore 27 as core 27 on socket 0 00:04:42.460 EAL: Detected lcore 28 as core 28 on socket 0 00:04:42.460 EAL: Detected lcore 29 as core 29 on socket 0 00:04:42.460 EAL: Detected lcore 30 as core 30 on socket 0 00:04:42.460 EAL: Detected lcore 31 as core 31 on socket 0 00:04:42.460 EAL: Detected lcore 32 as core 32 on socket 0 00:04:42.460 EAL: Detected lcore 33 as core 33 on socket 0 00:04:42.460 EAL: Detected lcore 34 as core 34 on socket 0 00:04:42.460 EAL: Detected lcore 35 as core 35 on socket 0 00:04:42.460 EAL: Detected lcore 36 as core 0 on socket 1 00:04:42.460 EAL: Detected lcore 37 as core 1 on socket 1 00:04:42.460 EAL: Detected lcore 38 as core 2 on socket 1 00:04:42.460 EAL: Detected lcore 39 as core 3 on socket 1 00:04:42.460 EAL: Detected lcore 40 as core 4 on socket 1 00:04:42.460 EAL: Detected lcore 41 as core 5 on socket 1 00:04:42.460 EAL: Detected lcore 42 as core 6 on socket 1 00:04:42.460 EAL: Detected lcore 43 as core 7 on socket 1 00:04:42.460 EAL: Detected lcore 44 as core 8 on socket 1 00:04:42.460 EAL: Detected lcore 45 as core 9 on socket 1 
00:04:42.460 EAL: Detected lcore 46 as core 10 on socket 1 00:04:42.460 EAL: Detected lcore 47 as core 11 on socket 1 00:04:42.460 EAL: Detected lcore 48 as core 12 on socket 1 00:04:42.460 EAL: Detected lcore 49 as core 13 on socket 1 00:04:42.460 EAL: Detected lcore 50 as core 14 on socket 1 00:04:42.460 EAL: Detected lcore 51 as core 15 on socket 1 00:04:42.460 EAL: Detected lcore 52 as core 16 on socket 1 00:04:42.460 EAL: Detected lcore 53 as core 17 on socket 1 00:04:42.460 EAL: Detected lcore 54 as core 18 on socket 1 00:04:42.460 EAL: Detected lcore 55 as core 19 on socket 1 00:04:42.460 EAL: Detected lcore 56 as core 20 on socket 1 00:04:42.460 EAL: Detected lcore 57 as core 21 on socket 1 00:04:42.460 EAL: Detected lcore 58 as core 22 on socket 1 00:04:42.460 EAL: Detected lcore 59 as core 23 on socket 1 00:04:42.460 EAL: Detected lcore 60 as core 24 on socket 1 00:04:42.460 EAL: Detected lcore 61 as core 25 on socket 1 00:04:42.460 EAL: Detected lcore 62 as core 26 on socket 1 00:04:42.460 EAL: Detected lcore 63 as core 27 on socket 1 00:04:42.460 EAL: Detected lcore 64 as core 28 on socket 1 00:04:42.460 EAL: Detected lcore 65 as core 29 on socket 1 00:04:42.460 EAL: Detected lcore 66 as core 30 on socket 1 00:04:42.460 EAL: Detected lcore 67 as core 31 on socket 1 00:04:42.460 EAL: Detected lcore 68 as core 32 on socket 1 00:04:42.460 EAL: Detected lcore 69 as core 33 on socket 1 00:04:42.460 EAL: Detected lcore 70 as core 34 on socket 1 00:04:42.460 EAL: Detected lcore 71 as core 35 on socket 1 00:04:42.460 EAL: Detected lcore 72 as core 0 on socket 0 00:04:42.460 EAL: Detected lcore 73 as core 1 on socket 0 00:04:42.460 EAL: Detected lcore 74 as core 2 on socket 0 00:04:42.460 EAL: Detected lcore 75 as core 3 on socket 0 00:04:42.460 EAL: Detected lcore 76 as core 4 on socket 0 00:04:42.460 EAL: Detected lcore 77 as core 5 on socket 0 00:04:42.460 EAL: Detected lcore 78 as core 6 on socket 0 00:04:42.460 EAL: Detected lcore 79 as core 7 on socket 0 00:04:42.460 EAL: Detected lcore 80 as core 8 on socket 0 00:04:42.460 EAL: Detected lcore 81 as core 9 on socket 0 00:04:42.460 EAL: Detected lcore 82 as core 10 on socket 0 00:04:42.460 EAL: Detected lcore 83 as core 11 on socket 0 00:04:42.460 EAL: Detected lcore 84 as core 12 on socket 0 00:04:42.460 EAL: Detected lcore 85 as core 13 on socket 0 00:04:42.460 EAL: Detected lcore 86 as core 14 on socket 0 00:04:42.460 EAL: Detected lcore 87 as core 15 on socket 0 00:04:42.460 EAL: Detected lcore 88 as core 16 on socket 0 00:04:42.460 EAL: Detected lcore 89 as core 17 on socket 0 00:04:42.460 EAL: Detected lcore 90 as core 18 on socket 0 00:04:42.460 EAL: Detected lcore 91 as core 19 on socket 0 00:04:42.461 EAL: Detected lcore 92 as core 20 on socket 0 00:04:42.461 EAL: Detected lcore 93 as core 21 on socket 0 00:04:42.461 EAL: Detected lcore 94 as core 22 on socket 0 00:04:42.461 EAL: Detected lcore 95 as core 23 on socket 0 00:04:42.461 EAL: Detected lcore 96 as core 24 on socket 0 00:04:42.461 EAL: Detected lcore 97 as core 25 on socket 0 00:04:42.461 EAL: Detected lcore 98 as core 26 on socket 0 00:04:42.461 EAL: Detected lcore 99 as core 27 on socket 0 00:04:42.461 EAL: Detected lcore 100 as core 28 on socket 0 00:04:42.461 EAL: Detected lcore 101 as core 29 on socket 0 00:04:42.461 EAL: Detected lcore 102 as core 30 on socket 0 00:04:42.461 EAL: Detected lcore 103 as core 31 on socket 0 00:04:42.461 EAL: Detected lcore 104 as core 32 on socket 0 00:04:42.461 EAL: Detected lcore 105 as core 33 on socket 0 00:04:42.461 EAL: 
Detected lcore 106 as core 34 on socket 0 00:04:42.461 EAL: Detected lcore 107 as core 35 on socket 0 00:04:42.461 EAL: Detected lcore 108 as core 0 on socket 1 00:04:42.461 EAL: Detected lcore 109 as core 1 on socket 1 00:04:42.461 EAL: Detected lcore 110 as core 2 on socket 1 00:04:42.461 EAL: Detected lcore 111 as core 3 on socket 1 00:04:42.461 EAL: Detected lcore 112 as core 4 on socket 1 00:04:42.461 EAL: Detected lcore 113 as core 5 on socket 1 00:04:42.461 EAL: Detected lcore 114 as core 6 on socket 1 00:04:42.461 EAL: Detected lcore 115 as core 7 on socket 1 00:04:42.461 EAL: Detected lcore 116 as core 8 on socket 1 00:04:42.461 EAL: Detected lcore 117 as core 9 on socket 1 00:04:42.461 EAL: Detected lcore 118 as core 10 on socket 1 00:04:42.461 EAL: Detected lcore 119 as core 11 on socket 1 00:04:42.461 EAL: Detected lcore 120 as core 12 on socket 1 00:04:42.461 EAL: Detected lcore 121 as core 13 on socket 1 00:04:42.461 EAL: Detected lcore 122 as core 14 on socket 1 00:04:42.461 EAL: Detected lcore 123 as core 15 on socket 1 00:04:42.461 EAL: Detected lcore 124 as core 16 on socket 1 00:04:42.461 EAL: Detected lcore 125 as core 17 on socket 1 00:04:42.461 EAL: Detected lcore 126 as core 18 on socket 1 00:04:42.461 EAL: Detected lcore 127 as core 19 on socket 1 00:04:42.461 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:42.461 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:42.461 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:42.461 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:42.461 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:42.461 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:42.461 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:42.461 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:42.461 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:42.461 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:42.461 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:42.461 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:42.461 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:42.461 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:42.461 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:42.461 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:42.461 EAL: Maximum logical cores by configuration: 128 00:04:42.461 EAL: Detected CPU lcores: 128 00:04:42.461 EAL: Detected NUMA nodes: 2 00:04:42.461 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:42.461 EAL: Detected shared linkage of DPDK 00:04:42.461 EAL: No shared files mode enabled, IPC will be disabled 00:04:42.461 EAL: Bus pci wants IOVA as 'DC' 00:04:42.461 EAL: Buses did not request a specific IOVA mode. 00:04:42.461 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:42.461 EAL: Selected IOVA mode 'VA' 00:04:42.461 EAL: Probing VFIO support... 00:04:42.461 EAL: IOMMU type 1 (Type 1) is supported 00:04:42.461 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:42.461 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:42.461 EAL: VFIO support initialized 00:04:42.461 EAL: Ask a virtual area of 0x2e000 bytes 00:04:42.461 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:42.461 EAL: Setting up physically contiguous memory... 
00:04:42.461 EAL: Setting maximum number of open files to 524288 00:04:42.461 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:42.461 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:42.461 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:42.461 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.461 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:42.461 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.461 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.461 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:42.461 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:42.461 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.461 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:42.461 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.461 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.461 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:42.461 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:42.461 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.461 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:42.461 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.461 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.461 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:42.461 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:42.461 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.461 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:42.461 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.461 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.461 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:42.461 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:42.461 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:42.461 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.461 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:42.461 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:42.461 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.461 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:42.461 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:42.461 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.461 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:42.461 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:42.461 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.461 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:42.461 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:42.461 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.461 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:42.461 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:42.461 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.461 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:42.461 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:42.461 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.461 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:42.461 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:42.461 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.461 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:42.461 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:42.461 EAL: Hugepages will be freed exactly as allocated. 00:04:42.461 EAL: No shared files mode enabled, IPC is disabled 00:04:42.461 EAL: No shared files mode enabled, IPC is disabled 00:04:42.461 EAL: TSC frequency is ~2400000 KHz 00:04:42.461 EAL: Main lcore 0 is ready (tid=7f9d02b9fa00;cpuset=[0]) 00:04:42.461 EAL: Trying to obtain current memory policy. 00:04:42.461 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.461 EAL: Restoring previous memory policy: 0 00:04:42.461 EAL: request: mp_malloc_sync 00:04:42.461 EAL: No shared files mode enabled, IPC is disabled 00:04:42.461 EAL: Heap on socket 0 was expanded by 2MB 00:04:42.461 EAL: No shared files mode enabled, IPC is disabled 00:04:42.461 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:42.461 EAL: Mem event callback 'spdk:(nil)' registered 00:04:42.461 00:04:42.461 00:04:42.461 CUnit - A unit testing framework for C - Version 2.1-3 00:04:42.461 http://cunit.sourceforge.net/ 00:04:42.461 00:04:42.461 00:04:42.461 Suite: components_suite 00:04:42.461 Test: vtophys_malloc_test ...passed 00:04:42.461 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:42.461 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.461 EAL: Restoring previous memory policy: 4 00:04:42.461 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.461 EAL: request: mp_malloc_sync 00:04:42.461 EAL: No shared files mode enabled, IPC is disabled 00:04:42.461 EAL: Heap on socket 0 was expanded by 4MB 00:04:42.461 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.461 EAL: request: mp_malloc_sync 00:04:42.461 EAL: No shared files mode enabled, IPC is disabled 00:04:42.461 EAL: Heap on socket 0 was shrunk by 4MB 00:04:42.461 EAL: Trying to obtain current memory policy. 00:04:42.461 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.461 EAL: Restoring previous memory policy: 4 00:04:42.461 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.461 EAL: request: mp_malloc_sync 00:04:42.461 EAL: No shared files mode enabled, IPC is disabled 00:04:42.461 EAL: Heap on socket 0 was expanded by 6MB 00:04:42.461 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.461 EAL: request: mp_malloc_sync 00:04:42.461 EAL: No shared files mode enabled, IPC is disabled 00:04:42.461 EAL: Heap on socket 0 was shrunk by 6MB 00:04:42.461 EAL: Trying to obtain current memory policy. 00:04:42.461 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.461 EAL: Restoring previous memory policy: 4 00:04:42.461 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.461 EAL: request: mp_malloc_sync 00:04:42.461 EAL: No shared files mode enabled, IPC is disabled 00:04:42.461 EAL: Heap on socket 0 was expanded by 10MB 00:04:42.461 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.461 EAL: request: mp_malloc_sync 00:04:42.461 EAL: No shared files mode enabled, IPC is disabled 00:04:42.461 EAL: Heap on socket 0 was shrunk by 10MB 00:04:42.461 EAL: Trying to obtain current memory policy. 
00:04:42.461 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.461 EAL: Restoring previous memory policy: 4 00:04:42.461 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.461 EAL: request: mp_malloc_sync 00:04:42.461 EAL: No shared files mode enabled, IPC is disabled 00:04:42.462 EAL: Heap on socket 0 was expanded by 18MB 00:04:42.462 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.462 EAL: request: mp_malloc_sync 00:04:42.462 EAL: No shared files mode enabled, IPC is disabled 00:04:42.462 EAL: Heap on socket 0 was shrunk by 18MB 00:04:42.462 EAL: Trying to obtain current memory policy. 00:04:42.462 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.462 EAL: Restoring previous memory policy: 4 00:04:42.462 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.462 EAL: request: mp_malloc_sync 00:04:42.462 EAL: No shared files mode enabled, IPC is disabled 00:04:42.462 EAL: Heap on socket 0 was expanded by 34MB 00:04:42.462 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.462 EAL: request: mp_malloc_sync 00:04:42.462 EAL: No shared files mode enabled, IPC is disabled 00:04:42.462 EAL: Heap on socket 0 was shrunk by 34MB 00:04:42.462 EAL: Trying to obtain current memory policy. 00:04:42.462 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.462 EAL: Restoring previous memory policy: 4 00:04:42.462 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.462 EAL: request: mp_malloc_sync 00:04:42.462 EAL: No shared files mode enabled, IPC is disabled 00:04:42.462 EAL: Heap on socket 0 was expanded by 66MB 00:04:42.462 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.462 EAL: request: mp_malloc_sync 00:04:42.462 EAL: No shared files mode enabled, IPC is disabled 00:04:42.462 EAL: Heap on socket 0 was shrunk by 66MB 00:04:42.462 EAL: Trying to obtain current memory policy. 00:04:42.462 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.462 EAL: Restoring previous memory policy: 4 00:04:42.462 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.462 EAL: request: mp_malloc_sync 00:04:42.462 EAL: No shared files mode enabled, IPC is disabled 00:04:42.462 EAL: Heap on socket 0 was expanded by 130MB 00:04:42.462 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.462 EAL: request: mp_malloc_sync 00:04:42.462 EAL: No shared files mode enabled, IPC is disabled 00:04:42.462 EAL: Heap on socket 0 was shrunk by 130MB 00:04:42.462 EAL: Trying to obtain current memory policy. 00:04:42.462 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.723 EAL: Restoring previous memory policy: 4 00:04:42.723 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.723 EAL: request: mp_malloc_sync 00:04:42.723 EAL: No shared files mode enabled, IPC is disabled 00:04:42.723 EAL: Heap on socket 0 was expanded by 258MB 00:04:42.723 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.723 EAL: request: mp_malloc_sync 00:04:42.723 EAL: No shared files mode enabled, IPC is disabled 00:04:42.723 EAL: Heap on socket 0 was shrunk by 258MB 00:04:42.723 EAL: Trying to obtain current memory policy. 
00:04:42.723 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.723 EAL: Restoring previous memory policy: 4 00:04:42.723 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.723 EAL: request: mp_malloc_sync 00:04:42.723 EAL: No shared files mode enabled, IPC is disabled 00:04:42.723 EAL: Heap on socket 0 was expanded by 514MB 00:04:42.723 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.723 EAL: request: mp_malloc_sync 00:04:42.723 EAL: No shared files mode enabled, IPC is disabled 00:04:42.723 EAL: Heap on socket 0 was shrunk by 514MB 00:04:42.723 EAL: Trying to obtain current memory policy. 00:04:42.723 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.984 EAL: Restoring previous memory policy: 4 00:04:42.984 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.984 EAL: request: mp_malloc_sync 00:04:42.984 EAL: No shared files mode enabled, IPC is disabled 00:04:42.984 EAL: Heap on socket 0 was expanded by 1026MB 00:04:42.984 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.245 EAL: request: mp_malloc_sync 00:04:43.245 EAL: No shared files mode enabled, IPC is disabled 00:04:43.245 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:43.245 passed 00:04:43.245 00:04:43.245 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.245 suites 1 1 n/a 0 0 00:04:43.245 tests 2 2 2 0 0 00:04:43.245 asserts 497 497 497 0 n/a 00:04:43.245 00:04:43.245 Elapsed time = 0.687 seconds 00:04:43.245 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.245 EAL: request: mp_malloc_sync 00:04:43.245 EAL: No shared files mode enabled, IPC is disabled 00:04:43.245 EAL: Heap on socket 0 was shrunk by 2MB 00:04:43.245 EAL: No shared files mode enabled, IPC is disabled 00:04:43.245 EAL: No shared files mode enabled, IPC is disabled 00:04:43.245 EAL: No shared files mode enabled, IPC is disabled 00:04:43.245 00:04:43.245 real 0m0.834s 00:04:43.245 user 0m0.434s 00:04:43.245 sys 0m0.376s 00:04:43.245 15:14:32 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.245 15:14:32 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:43.245 ************************************ 00:04:43.245 END TEST env_vtophys 00:04:43.245 ************************************ 00:04:43.245 15:14:32 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:43.245 15:14:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.245 15:14:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.245 15:14:32 env -- common/autotest_common.sh@10 -- # set +x 00:04:43.245 ************************************ 00:04:43.245 START TEST env_pci 00:04:43.245 ************************************ 00:04:43.245 15:14:32 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:43.245 00:04:43.245 00:04:43.245 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.245 http://cunit.sourceforge.net/ 00:04:43.245 00:04:43.245 00:04:43.245 Suite: pci 00:04:43.245 Test: pci_hook ...[2024-11-20 15:14:32.106877] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 359278 has claimed it 00:04:43.245 EAL: Cannot find device (10000:00:01.0) 00:04:43.245 EAL: Failed to attach device on primary process 00:04:43.245 passed 00:04:43.245 00:04:43.245 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:43.245 suites 1 1 n/a 0 0 00:04:43.245 tests 1 1 1 0 0 00:04:43.245 asserts 25 25 25 0 n/a 00:04:43.245 00:04:43.245 Elapsed time = 0.030 seconds 00:04:43.245 00:04:43.245 real 0m0.052s 00:04:43.245 user 0m0.015s 00:04:43.245 sys 0m0.037s 00:04:43.245 15:14:32 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.245 15:14:32 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:43.245 ************************************ 00:04:43.245 END TEST env_pci 00:04:43.245 ************************************ 00:04:43.245 15:14:32 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:43.245 15:14:32 env -- env/env.sh@15 -- # uname 00:04:43.245 15:14:32 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:43.245 15:14:32 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:43.245 15:14:32 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:43.245 15:14:32 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:43.245 15:14:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.245 15:14:32 env -- common/autotest_common.sh@10 -- # set +x 00:04:43.506 ************************************ 00:04:43.506 START TEST env_dpdk_post_init 00:04:43.506 ************************************ 00:04:43.506 15:14:32 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:43.506 EAL: Detected CPU lcores: 128 00:04:43.507 EAL: Detected NUMA nodes: 2 00:04:43.507 EAL: Detected shared linkage of DPDK 00:04:43.507 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:43.507 EAL: Selected IOVA mode 'VA' 00:04:43.507 EAL: VFIO support initialized 00:04:43.507 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:43.507 EAL: Using IOMMU type 1 (Type 1) 00:04:43.767 EAL: Ignore mapping IO port bar(1) 00:04:43.768 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:04:43.768 EAL: Ignore mapping IO port bar(1) 00:04:44.028 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:04:44.028 EAL: Ignore mapping IO port bar(1) 00:04:44.289 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:04:44.289 EAL: Ignore mapping IO port bar(1) 00:04:44.549 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:04:44.549 EAL: Ignore mapping IO port bar(1) 00:04:44.549 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:04:44.810 EAL: Ignore mapping IO port bar(1) 00:04:44.810 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:04:45.072 EAL: Ignore mapping IO port bar(1) 00:04:45.072 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:04:45.332 EAL: Ignore mapping IO port bar(1) 00:04:45.332 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:04:45.593 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:04:45.593 EAL: Ignore mapping IO port bar(1) 00:04:45.854 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:04:45.854 EAL: Ignore mapping IO port bar(1) 00:04:46.114 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:04:46.114 EAL: Ignore mapping IO port bar(1) 00:04:46.114 
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:04:46.376 EAL: Ignore mapping IO port bar(1) 00:04:46.377 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:04:46.638 EAL: Ignore mapping IO port bar(1) 00:04:46.638 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:04:46.900 EAL: Ignore mapping IO port bar(1) 00:04:46.900 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:04:46.900 EAL: Ignore mapping IO port bar(1) 00:04:47.161 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:04:47.161 EAL: Ignore mapping IO port bar(1) 00:04:47.423 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:04:47.423 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:04:47.423 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:04:47.423 Starting DPDK initialization... 00:04:47.423 Starting SPDK post initialization... 00:04:47.423 SPDK NVMe probe 00:04:47.423 Attaching to 0000:65:00.0 00:04:47.423 Attached to 0000:65:00.0 00:04:47.423 Cleaning up... 00:04:49.338 00:04:49.338 real 0m5.747s 00:04:49.338 user 0m0.105s 00:04:49.338 sys 0m0.196s 00:04:49.338 15:14:37 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.338 15:14:37 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:49.338 ************************************ 00:04:49.338 END TEST env_dpdk_post_init 00:04:49.338 ************************************ 00:04:49.338 15:14:38 env -- env/env.sh@26 -- # uname 00:04:49.338 15:14:38 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:49.338 15:14:38 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:49.338 15:14:38 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.338 15:14:38 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.338 15:14:38 env -- common/autotest_common.sh@10 -- # set +x 00:04:49.338 ************************************ 00:04:49.338 START TEST env_mem_callbacks 00:04:49.338 ************************************ 00:04:49.338 15:14:38 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:49.338 EAL: Detected CPU lcores: 128 00:04:49.338 EAL: Detected NUMA nodes: 2 00:04:49.338 EAL: Detected shared linkage of DPDK 00:04:49.338 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:49.338 EAL: Selected IOVA mode 'VA' 00:04:49.338 EAL: VFIO support initialized 00:04:49.338 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:49.338 00:04:49.338 00:04:49.338 CUnit - A unit testing framework for C - Version 2.1-3 00:04:49.338 http://cunit.sourceforge.net/ 00:04:49.338 00:04:49.338 00:04:49.338 Suite: memory 00:04:49.338 Test: test ... 
00:04:49.338 register 0x200000200000 2097152 00:04:49.338 malloc 3145728 00:04:49.338 register 0x200000400000 4194304 00:04:49.338 buf 0x200000500000 len 3145728 PASSED 00:04:49.338 malloc 64 00:04:49.338 buf 0x2000004fff40 len 64 PASSED 00:04:49.338 malloc 4194304 00:04:49.338 register 0x200000800000 6291456 00:04:49.338 buf 0x200000a00000 len 4194304 PASSED 00:04:49.338 free 0x200000500000 3145728 00:04:49.338 free 0x2000004fff40 64 00:04:49.338 unregister 0x200000400000 4194304 PASSED 00:04:49.338 free 0x200000a00000 4194304 00:04:49.338 unregister 0x200000800000 6291456 PASSED 00:04:49.338 malloc 8388608 00:04:49.338 register 0x200000400000 10485760 00:04:49.338 buf 0x200000600000 len 8388608 PASSED 00:04:49.338 free 0x200000600000 8388608 00:04:49.338 unregister 0x200000400000 10485760 PASSED 00:04:49.338 passed 00:04:49.338 00:04:49.338 Run Summary: Type Total Ran Passed Failed Inactive 00:04:49.338 suites 1 1 n/a 0 0 00:04:49.338 tests 1 1 1 0 0 00:04:49.338 asserts 15 15 15 0 n/a 00:04:49.338 00:04:49.338 Elapsed time = 0.010 seconds 00:04:49.338 00:04:49.338 real 0m0.068s 00:04:49.338 user 0m0.019s 00:04:49.338 sys 0m0.048s 00:04:49.338 15:14:38 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.338 15:14:38 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:49.338 ************************************ 00:04:49.338 END TEST env_mem_callbacks 00:04:49.338 ************************************ 00:04:49.338 00:04:49.338 real 0m7.536s 00:04:49.338 user 0m1.055s 00:04:49.338 sys 0m1.046s 00:04:49.338 15:14:38 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.338 15:14:38 env -- common/autotest_common.sh@10 -- # set +x 00:04:49.338 ************************************ 00:04:49.338 END TEST env 00:04:49.338 ************************************ 00:04:49.338 15:14:38 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:49.338 15:14:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.338 15:14:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.338 15:14:38 -- common/autotest_common.sh@10 -- # set +x 00:04:49.338 ************************************ 00:04:49.339 START TEST rpc 00:04:49.339 ************************************ 00:04:49.339 15:14:38 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:49.601 * Looking for test storage... 
00:04:49.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:49.601 15:14:38 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:49.601 15:14:38 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:49.601 15:14:38 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:49.601 15:14:38 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:49.601 15:14:38 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.601 15:14:38 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.601 15:14:38 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.601 15:14:38 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.601 15:14:38 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.601 15:14:38 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.601 15:14:38 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.601 15:14:38 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.601 15:14:38 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.601 15:14:38 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.601 15:14:38 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.601 15:14:38 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:49.601 15:14:38 rpc -- scripts/common.sh@345 -- # : 1 00:04:49.601 15:14:38 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.601 15:14:38 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:49.601 15:14:38 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:49.601 15:14:38 rpc -- scripts/common.sh@353 -- # local d=1 00:04:49.601 15:14:38 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.601 15:14:38 rpc -- scripts/common.sh@355 -- # echo 1 00:04:49.601 15:14:38 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.601 15:14:38 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:49.601 15:14:38 rpc -- scripts/common.sh@353 -- # local d=2 00:04:49.601 15:14:38 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.601 15:14:38 rpc -- scripts/common.sh@355 -- # echo 2 00:04:49.601 15:14:38 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.601 15:14:38 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.601 15:14:38 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.601 15:14:38 rpc -- scripts/common.sh@368 -- # return 0 00:04:49.601 15:14:38 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.601 15:14:38 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:49.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.601 --rc genhtml_branch_coverage=1 00:04:49.601 --rc genhtml_function_coverage=1 00:04:49.601 --rc genhtml_legend=1 00:04:49.601 --rc geninfo_all_blocks=1 00:04:49.601 --rc geninfo_unexecuted_blocks=1 00:04:49.601 00:04:49.601 ' 00:04:49.601 15:14:38 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:49.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.601 --rc genhtml_branch_coverage=1 00:04:49.601 --rc genhtml_function_coverage=1 00:04:49.601 --rc genhtml_legend=1 00:04:49.601 --rc geninfo_all_blocks=1 00:04:49.601 --rc geninfo_unexecuted_blocks=1 00:04:49.601 00:04:49.601 ' 00:04:49.601 15:14:38 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:49.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.601 --rc genhtml_branch_coverage=1 00:04:49.601 --rc genhtml_function_coverage=1 
00:04:49.601 --rc genhtml_legend=1 00:04:49.601 --rc geninfo_all_blocks=1 00:04:49.601 --rc geninfo_unexecuted_blocks=1 00:04:49.601 00:04:49.601 ' 00:04:49.601 15:14:38 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:49.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.601 --rc genhtml_branch_coverage=1 00:04:49.601 --rc genhtml_function_coverage=1 00:04:49.601 --rc genhtml_legend=1 00:04:49.601 --rc geninfo_all_blocks=1 00:04:49.601 --rc geninfo_unexecuted_blocks=1 00:04:49.601 00:04:49.601 ' 00:04:49.601 15:14:38 rpc -- rpc/rpc.sh@65 -- # spdk_pid=360642 00:04:49.601 15:14:38 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.601 15:14:38 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:49.601 15:14:38 rpc -- rpc/rpc.sh@67 -- # waitforlisten 360642 00:04:49.601 15:14:38 rpc -- common/autotest_common.sh@835 -- # '[' -z 360642 ']' 00:04:49.601 15:14:38 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.601 15:14:38 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.601 15:14:38 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.601 15:14:38 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.601 15:14:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.601 [2024-11-20 15:14:38.524737] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:04:49.601 [2024-11-20 15:14:38.524800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid360642 ] 00:04:49.864 [2024-11-20 15:14:38.617736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.864 [2024-11-20 15:14:38.669552] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:49.864 [2024-11-20 15:14:38.669607] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 360642' to capture a snapshot of events at runtime. 00:04:49.864 [2024-11-20 15:14:38.669616] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:49.864 [2024-11-20 15:14:38.669623] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:49.864 [2024-11-20 15:14:38.669629] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid360642 for offline analysis/debug. 
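With spdk_tgt now up and listening on /var/tmp/spdk.sock, every rpc_cmd invocation in the tests below is a JSON-RPC 2.0 request/response over that Unix-domain socket. A bare-POSIX sketch of the exchange, standing in for the harness's rpc.py plumbing (the method name is just an example):

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int
    main(void)
    {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        struct sockaddr_un sa = { .sun_family = AF_UNIX };
        strncpy(sa.sun_path, "/var/tmp/spdk.sock", sizeof(sa.sun_path) - 1);
        if (fd < 0 || connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
            return 1;

        /* The same request rpc_cmd sends for e.g. "bdev_get_bdevs". */
        const char *req =
            "{\"jsonrpc\":\"2.0\",\"method\":\"bdev_get_bdevs\",\"id\":1}";
        write(fd, req, strlen(req));

        char buf[4096];
        ssize_t n = read(fd, buf, sizeof(buf) - 1); /* the JSON dumps in the log */
        if (n > 0) {
            buf[n] = '\0';
            printf("%s\n", buf);
        }
        close(fd);
        return 0;
    }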
00:04:49.864 [2024-11-20 15:14:38.670422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.438 15:14:39 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.438 15:14:39 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:50.438 15:14:39 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:50.438 15:14:39 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:50.438 15:14:39 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:50.438 15:14:39 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:50.438 15:14:39 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.438 15:14:39 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.438 15:14:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.438 ************************************ 00:04:50.438 START TEST rpc_integrity 00:04:50.438 ************************************ 00:04:50.438 15:14:39 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:50.438 15:14:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:50.438 15:14:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.438 15:14:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.699 15:14:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.699 15:14:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:50.699 15:14:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:50.699 15:14:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:50.699 15:14:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:50.699 15:14:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.699 15:14:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.699 15:14:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.699 15:14:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:50.699 15:14:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:50.699 15:14:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.699 15:14:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.699 15:14:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.699 15:14:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:50.699 { 00:04:50.699 "name": "Malloc0", 00:04:50.699 "aliases": [ 00:04:50.699 "9cb14c23-2ffa-4273-b9c3-4db74dfd5788" 00:04:50.699 ], 00:04:50.699 "product_name": "Malloc disk", 00:04:50.699 "block_size": 512, 00:04:50.699 "num_blocks": 16384, 00:04:50.699 "uuid": "9cb14c23-2ffa-4273-b9c3-4db74dfd5788", 00:04:50.699 "assigned_rate_limits": { 00:04:50.699 "rw_ios_per_sec": 0, 00:04:50.699 "rw_mbytes_per_sec": 0, 00:04:50.699 "r_mbytes_per_sec": 0, 00:04:50.699 "w_mbytes_per_sec": 0 00:04:50.699 }, 
00:04:50.699 "claimed": false, 00:04:50.699 "zoned": false, 00:04:50.699 "supported_io_types": { 00:04:50.699 "read": true, 00:04:50.699 "write": true, 00:04:50.699 "unmap": true, 00:04:50.699 "flush": true, 00:04:50.699 "reset": true, 00:04:50.699 "nvme_admin": false, 00:04:50.699 "nvme_io": false, 00:04:50.699 "nvme_io_md": false, 00:04:50.699 "write_zeroes": true, 00:04:50.699 "zcopy": true, 00:04:50.699 "get_zone_info": false, 00:04:50.699 "zone_management": false, 00:04:50.699 "zone_append": false, 00:04:50.699 "compare": false, 00:04:50.699 "compare_and_write": false, 00:04:50.699 "abort": true, 00:04:50.699 "seek_hole": false, 00:04:50.699 "seek_data": false, 00:04:50.699 "copy": true, 00:04:50.699 "nvme_iov_md": false 00:04:50.699 }, 00:04:50.699 "memory_domains": [ 00:04:50.699 { 00:04:50.699 "dma_device_id": "system", 00:04:50.699 "dma_device_type": 1 00:04:50.699 }, 00:04:50.699 { 00:04:50.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.699 "dma_device_type": 2 00:04:50.699 } 00:04:50.699 ], 00:04:50.699 "driver_specific": {} 00:04:50.699 } 00:04:50.699 ]' 00:04:50.699 15:14:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:50.699 15:14:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:50.699 15:14:39 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:50.699 15:14:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.699 15:14:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.699 [2024-11-20 15:14:39.536869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:50.700 [2024-11-20 15:14:39.536919] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:50.700 [2024-11-20 15:14:39.536935] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1540800 00:04:50.700 [2024-11-20 15:14:39.536944] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:50.700 [2024-11-20 15:14:39.538504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:50.700 [2024-11-20 15:14:39.538539] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:50.700 Passthru0 00:04:50.700 15:14:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.700 15:14:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:50.700 15:14:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.700 15:14:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.700 15:14:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.700 15:14:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:50.700 { 00:04:50.700 "name": "Malloc0", 00:04:50.700 "aliases": [ 00:04:50.700 "9cb14c23-2ffa-4273-b9c3-4db74dfd5788" 00:04:50.700 ], 00:04:50.700 "product_name": "Malloc disk", 00:04:50.700 "block_size": 512, 00:04:50.700 "num_blocks": 16384, 00:04:50.700 "uuid": "9cb14c23-2ffa-4273-b9c3-4db74dfd5788", 00:04:50.700 "assigned_rate_limits": { 00:04:50.700 "rw_ios_per_sec": 0, 00:04:50.700 "rw_mbytes_per_sec": 0, 00:04:50.700 "r_mbytes_per_sec": 0, 00:04:50.700 "w_mbytes_per_sec": 0 00:04:50.700 }, 00:04:50.700 "claimed": true, 00:04:50.700 "claim_type": "exclusive_write", 00:04:50.700 "zoned": false, 00:04:50.700 "supported_io_types": { 00:04:50.700 "read": true, 00:04:50.700 "write": true, 00:04:50.700 "unmap": true, 00:04:50.700 "flush": 
true, 00:04:50.700 "reset": true, 00:04:50.700 "nvme_admin": false, 00:04:50.700 "nvme_io": false, 00:04:50.700 "nvme_io_md": false, 00:04:50.700 "write_zeroes": true, 00:04:50.700 "zcopy": true, 00:04:50.700 "get_zone_info": false, 00:04:50.700 "zone_management": false, 00:04:50.700 "zone_append": false, 00:04:50.700 "compare": false, 00:04:50.700 "compare_and_write": false, 00:04:50.700 "abort": true, 00:04:50.700 "seek_hole": false, 00:04:50.700 "seek_data": false, 00:04:50.700 "copy": true, 00:04:50.700 "nvme_iov_md": false 00:04:50.700 }, 00:04:50.700 "memory_domains": [ 00:04:50.700 { 00:04:50.700 "dma_device_id": "system", 00:04:50.700 "dma_device_type": 1 00:04:50.700 }, 00:04:50.700 { 00:04:50.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.700 "dma_device_type": 2 00:04:50.700 } 00:04:50.700 ], 00:04:50.700 "driver_specific": {} 00:04:50.700 }, 00:04:50.700 { 00:04:50.700 "name": "Passthru0", 00:04:50.700 "aliases": [ 00:04:50.700 "c148c004-1836-5ca7-b687-48fc2080e191" 00:04:50.700 ], 00:04:50.700 "product_name": "passthru", 00:04:50.700 "block_size": 512, 00:04:50.700 "num_blocks": 16384, 00:04:50.700 "uuid": "c148c004-1836-5ca7-b687-48fc2080e191", 00:04:50.700 "assigned_rate_limits": { 00:04:50.700 "rw_ios_per_sec": 0, 00:04:50.700 "rw_mbytes_per_sec": 0, 00:04:50.700 "r_mbytes_per_sec": 0, 00:04:50.700 "w_mbytes_per_sec": 0 00:04:50.700 }, 00:04:50.700 "claimed": false, 00:04:50.700 "zoned": false, 00:04:50.700 "supported_io_types": { 00:04:50.700 "read": true, 00:04:50.700 "write": true, 00:04:50.700 "unmap": true, 00:04:50.700 "flush": true, 00:04:50.700 "reset": true, 00:04:50.700 "nvme_admin": false, 00:04:50.700 "nvme_io": false, 00:04:50.700 "nvme_io_md": false, 00:04:50.700 "write_zeroes": true, 00:04:50.700 "zcopy": true, 00:04:50.700 "get_zone_info": false, 00:04:50.700 "zone_management": false, 00:04:50.700 "zone_append": false, 00:04:50.700 "compare": false, 00:04:50.700 "compare_and_write": false, 00:04:50.700 "abort": true, 00:04:50.700 "seek_hole": false, 00:04:50.700 "seek_data": false, 00:04:50.700 "copy": true, 00:04:50.700 "nvme_iov_md": false 00:04:50.700 }, 00:04:50.700 "memory_domains": [ 00:04:50.700 { 00:04:50.700 "dma_device_id": "system", 00:04:50.700 "dma_device_type": 1 00:04:50.700 }, 00:04:50.700 { 00:04:50.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.700 "dma_device_type": 2 00:04:50.700 } 00:04:50.700 ], 00:04:50.700 "driver_specific": { 00:04:50.700 "passthru": { 00:04:50.700 "name": "Passthru0", 00:04:50.700 "base_bdev_name": "Malloc0" 00:04:50.700 } 00:04:50.700 } 00:04:50.700 } 00:04:50.700 ]' 00:04:50.700 15:14:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:50.700 15:14:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:50.700 15:14:39 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:50.700 15:14:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.700 15:14:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.700 15:14:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.700 15:14:39 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:50.700 15:14:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.700 15:14:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.700 15:14:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.700 15:14:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:04:50.700 15:14:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.700 15:14:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.700 15:14:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.700 15:14:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:50.700 15:14:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:50.962 15:14:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:50.962 00:04:50.962 real 0m0.310s 00:04:50.962 user 0m0.187s 00:04:50.962 sys 0m0.047s 00:04:50.962 15:14:39 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.962 15:14:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.962 ************************************ 00:04:50.962 END TEST rpc_integrity 00:04:50.962 ************************************ 00:04:50.962 15:14:39 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:50.962 15:14:39 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.962 15:14:39 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.962 15:14:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.962 ************************************ 00:04:50.962 START TEST rpc_plugins 00:04:50.962 ************************************ 00:04:50.962 15:14:39 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:50.962 15:14:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:50.962 15:14:39 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.962 15:14:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:50.962 15:14:39 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.962 15:14:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:50.962 15:14:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:50.962 15:14:39 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.962 15:14:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:50.962 15:14:39 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.962 15:14:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:50.962 { 00:04:50.962 "name": "Malloc1", 00:04:50.962 "aliases": [ 00:04:50.962 "0fcfd5e2-e01f-4ea4-8e53-963ddfef9011" 00:04:50.962 ], 00:04:50.962 "product_name": "Malloc disk", 00:04:50.962 "block_size": 4096, 00:04:50.962 "num_blocks": 256, 00:04:50.962 "uuid": "0fcfd5e2-e01f-4ea4-8e53-963ddfef9011", 00:04:50.962 "assigned_rate_limits": { 00:04:50.962 "rw_ios_per_sec": 0, 00:04:50.962 "rw_mbytes_per_sec": 0, 00:04:50.962 "r_mbytes_per_sec": 0, 00:04:50.962 "w_mbytes_per_sec": 0 00:04:50.962 }, 00:04:50.962 "claimed": false, 00:04:50.962 "zoned": false, 00:04:50.962 "supported_io_types": { 00:04:50.962 "read": true, 00:04:50.962 "write": true, 00:04:50.962 "unmap": true, 00:04:50.962 "flush": true, 00:04:50.962 "reset": true, 00:04:50.962 "nvme_admin": false, 00:04:50.962 "nvme_io": false, 00:04:50.962 "nvme_io_md": false, 00:04:50.962 "write_zeroes": true, 00:04:50.962 "zcopy": true, 00:04:50.962 "get_zone_info": false, 00:04:50.962 "zone_management": false, 00:04:50.962 "zone_append": false, 00:04:50.962 "compare": false, 00:04:50.962 "compare_and_write": false, 00:04:50.962 "abort": true, 00:04:50.962 "seek_hole": false, 00:04:50.962 "seek_data": false, 00:04:50.962 "copy": true, 00:04:50.962 "nvme_iov_md": false 
00:04:50.962 }, 00:04:50.962 "memory_domains": [ 00:04:50.962 { 00:04:50.962 "dma_device_id": "system", 00:04:50.962 "dma_device_type": 1 00:04:50.962 }, 00:04:50.962 { 00:04:50.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.962 "dma_device_type": 2 00:04:50.962 } 00:04:50.962 ], 00:04:50.962 "driver_specific": {} 00:04:50.962 } 00:04:50.962 ]' 00:04:50.962 15:14:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:50.962 15:14:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:50.962 15:14:39 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:50.962 15:14:39 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.962 15:14:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:50.962 15:14:39 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.962 15:14:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:50.962 15:14:39 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.962 15:14:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:50.962 15:14:39 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.962 15:14:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:50.962 15:14:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:51.224 15:14:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:51.224 00:04:51.224 real 0m0.152s 00:04:51.224 user 0m0.097s 00:04:51.224 sys 0m0.021s 00:04:51.224 15:14:39 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.224 15:14:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:51.224 ************************************ 00:04:51.224 END TEST rpc_plugins 00:04:51.224 ************************************ 00:04:51.224 15:14:39 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:51.224 15:14:39 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.224 15:14:39 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.224 15:14:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.224 ************************************ 00:04:51.224 START TEST rpc_trace_cmd_test 00:04:51.224 ************************************ 00:04:51.224 15:14:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:51.224 15:14:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:51.224 15:14:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:51.224 15:14:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.224 15:14:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:51.224 15:14:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.224 15:14:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:51.224 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid360642", 00:04:51.224 "tpoint_group_mask": "0x8", 00:04:51.224 "iscsi_conn": { 00:04:51.224 "mask": "0x2", 00:04:51.224 "tpoint_mask": "0x0" 00:04:51.224 }, 00:04:51.224 "scsi": { 00:04:51.224 "mask": "0x4", 00:04:51.224 "tpoint_mask": "0x0" 00:04:51.224 }, 00:04:51.224 "bdev": { 00:04:51.224 "mask": "0x8", 00:04:51.224 "tpoint_mask": "0xffffffffffffffff" 00:04:51.224 }, 00:04:51.224 "nvmf_rdma": { 00:04:51.224 "mask": "0x10", 00:04:51.224 "tpoint_mask": "0x0" 00:04:51.224 }, 00:04:51.224 "nvmf_tcp": { 00:04:51.224 "mask": "0x20", 00:04:51.224 
"tpoint_mask": "0x0" 00:04:51.224 }, 00:04:51.224 "ftl": { 00:04:51.224 "mask": "0x40", 00:04:51.224 "tpoint_mask": "0x0" 00:04:51.224 }, 00:04:51.224 "blobfs": { 00:04:51.224 "mask": "0x80", 00:04:51.224 "tpoint_mask": "0x0" 00:04:51.224 }, 00:04:51.224 "dsa": { 00:04:51.224 "mask": "0x200", 00:04:51.224 "tpoint_mask": "0x0" 00:04:51.224 }, 00:04:51.224 "thread": { 00:04:51.224 "mask": "0x400", 00:04:51.224 "tpoint_mask": "0x0" 00:04:51.224 }, 00:04:51.224 "nvme_pcie": { 00:04:51.224 "mask": "0x800", 00:04:51.224 "tpoint_mask": "0x0" 00:04:51.224 }, 00:04:51.224 "iaa": { 00:04:51.224 "mask": "0x1000", 00:04:51.224 "tpoint_mask": "0x0" 00:04:51.224 }, 00:04:51.224 "nvme_tcp": { 00:04:51.224 "mask": "0x2000", 00:04:51.224 "tpoint_mask": "0x0" 00:04:51.224 }, 00:04:51.224 "bdev_nvme": { 00:04:51.224 "mask": "0x4000", 00:04:51.224 "tpoint_mask": "0x0" 00:04:51.224 }, 00:04:51.224 "sock": { 00:04:51.224 "mask": "0x8000", 00:04:51.224 "tpoint_mask": "0x0" 00:04:51.224 }, 00:04:51.224 "blob": { 00:04:51.224 "mask": "0x10000", 00:04:51.224 "tpoint_mask": "0x0" 00:04:51.224 }, 00:04:51.224 "bdev_raid": { 00:04:51.224 "mask": "0x20000", 00:04:51.224 "tpoint_mask": "0x0" 00:04:51.224 }, 00:04:51.224 "scheduler": { 00:04:51.224 "mask": "0x40000", 00:04:51.224 "tpoint_mask": "0x0" 00:04:51.224 } 00:04:51.224 }' 00:04:51.224 15:14:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:51.224 15:14:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:51.224 15:14:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:51.224 15:14:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:51.224 15:14:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:51.225 15:14:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:51.225 15:14:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:51.225 15:14:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:51.485 15:14:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:51.485 15:14:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:51.485 00:04:51.485 real 0m0.212s 00:04:51.485 user 0m0.169s 00:04:51.485 sys 0m0.036s 00:04:51.485 15:14:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.485 15:14:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:51.485 ************************************ 00:04:51.485 END TEST rpc_trace_cmd_test 00:04:51.485 ************************************ 00:04:51.485 15:14:40 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:51.485 15:14:40 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:51.485 15:14:40 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:51.485 15:14:40 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.485 15:14:40 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.485 15:14:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.485 ************************************ 00:04:51.485 START TEST rpc_daemon_integrity 00:04:51.485 ************************************ 00:04:51.485 15:14:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:51.485 15:14:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:51.485 15:14:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.485 15:14:40 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.485 15:14:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.485 15:14:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:51.485 15:14:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:51.485 15:14:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:51.485 15:14:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:51.485 15:14:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.485 15:14:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.485 15:14:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.485 15:14:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:51.485 15:14:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:51.485 15:14:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.485 15:14:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.485 15:14:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.485 15:14:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:51.485 { 00:04:51.485 "name": "Malloc2", 00:04:51.485 "aliases": [ 00:04:51.485 "b8136524-3e31-45e0-8efe-267bbc4b3838" 00:04:51.485 ], 00:04:51.485 "product_name": "Malloc disk", 00:04:51.485 "block_size": 512, 00:04:51.485 "num_blocks": 16384, 00:04:51.485 "uuid": "b8136524-3e31-45e0-8efe-267bbc4b3838", 00:04:51.485 "assigned_rate_limits": { 00:04:51.485 "rw_ios_per_sec": 0, 00:04:51.486 "rw_mbytes_per_sec": 0, 00:04:51.486 "r_mbytes_per_sec": 0, 00:04:51.486 "w_mbytes_per_sec": 0 00:04:51.486 }, 00:04:51.486 "claimed": false, 00:04:51.486 "zoned": false, 00:04:51.486 "supported_io_types": { 00:04:51.486 "read": true, 00:04:51.486 "write": true, 00:04:51.486 "unmap": true, 00:04:51.486 "flush": true, 00:04:51.486 "reset": true, 00:04:51.486 "nvme_admin": false, 00:04:51.486 "nvme_io": false, 00:04:51.486 "nvme_io_md": false, 00:04:51.486 "write_zeroes": true, 00:04:51.486 "zcopy": true, 00:04:51.486 "get_zone_info": false, 00:04:51.486 "zone_management": false, 00:04:51.486 "zone_append": false, 00:04:51.486 "compare": false, 00:04:51.486 "compare_and_write": false, 00:04:51.486 "abort": true, 00:04:51.486 "seek_hole": false, 00:04:51.486 "seek_data": false, 00:04:51.486 "copy": true, 00:04:51.486 "nvme_iov_md": false 00:04:51.486 }, 00:04:51.486 "memory_domains": [ 00:04:51.486 { 00:04:51.486 "dma_device_id": "system", 00:04:51.486 "dma_device_type": 1 00:04:51.486 }, 00:04:51.486 { 00:04:51.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.486 "dma_device_type": 2 00:04:51.486 } 00:04:51.486 ], 00:04:51.486 "driver_specific": {} 00:04:51.486 } 00:04:51.486 ]' 00:04:51.486 15:14:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:51.746 15:14:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:51.746 15:14:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:51.746 15:14:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.746 15:14:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.746 [2024-11-20 15:14:40.455584] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:51.746 
[2024-11-20 15:14:40.455628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:51.746 [2024-11-20 15:14:40.455646] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13fcfe0 00:04:51.746 [2024-11-20 15:14:40.455654] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:51.746 [2024-11-20 15:14:40.457114] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:51.746 [2024-11-20 15:14:40.457149] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:51.746 Passthru0 00:04:51.746 15:14:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.746 15:14:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:51.746 15:14:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.746 15:14:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.746 15:14:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.746 15:14:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:51.746 { 00:04:51.746 "name": "Malloc2", 00:04:51.746 "aliases": [ 00:04:51.746 "b8136524-3e31-45e0-8efe-267bbc4b3838" 00:04:51.746 ], 00:04:51.746 "product_name": "Malloc disk", 00:04:51.746 "block_size": 512, 00:04:51.746 "num_blocks": 16384, 00:04:51.746 "uuid": "b8136524-3e31-45e0-8efe-267bbc4b3838", 00:04:51.746 "assigned_rate_limits": { 00:04:51.746 "rw_ios_per_sec": 0, 00:04:51.746 "rw_mbytes_per_sec": 0, 00:04:51.746 "r_mbytes_per_sec": 0, 00:04:51.746 "w_mbytes_per_sec": 0 00:04:51.746 }, 00:04:51.746 "claimed": true, 00:04:51.746 "claim_type": "exclusive_write", 00:04:51.746 "zoned": false, 00:04:51.746 "supported_io_types": { 00:04:51.746 "read": true, 00:04:51.746 "write": true, 00:04:51.746 "unmap": true, 00:04:51.746 "flush": true, 00:04:51.746 "reset": true, 00:04:51.746 "nvme_admin": false, 00:04:51.746 "nvme_io": false, 00:04:51.746 "nvme_io_md": false, 00:04:51.746 "write_zeroes": true, 00:04:51.746 "zcopy": true, 00:04:51.746 "get_zone_info": false, 00:04:51.746 "zone_management": false, 00:04:51.746 "zone_append": false, 00:04:51.746 "compare": false, 00:04:51.746 "compare_and_write": false, 00:04:51.746 "abort": true, 00:04:51.746 "seek_hole": false, 00:04:51.746 "seek_data": false, 00:04:51.746 "copy": true, 00:04:51.746 "nvme_iov_md": false 00:04:51.746 }, 00:04:51.746 "memory_domains": [ 00:04:51.746 { 00:04:51.746 "dma_device_id": "system", 00:04:51.746 "dma_device_type": 1 00:04:51.746 }, 00:04:51.746 { 00:04:51.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.746 "dma_device_type": 2 00:04:51.746 } 00:04:51.746 ], 00:04:51.746 "driver_specific": {} 00:04:51.746 }, 00:04:51.746 { 00:04:51.746 "name": "Passthru0", 00:04:51.746 "aliases": [ 00:04:51.746 "d6032aeb-5a3d-5ac2-a1a7-6ad8a4f7caa9" 00:04:51.746 ], 00:04:51.746 "product_name": "passthru", 00:04:51.746 "block_size": 512, 00:04:51.746 "num_blocks": 16384, 00:04:51.746 "uuid": "d6032aeb-5a3d-5ac2-a1a7-6ad8a4f7caa9", 00:04:51.746 "assigned_rate_limits": { 00:04:51.746 "rw_ios_per_sec": 0, 00:04:51.746 "rw_mbytes_per_sec": 0, 00:04:51.746 "r_mbytes_per_sec": 0, 00:04:51.746 "w_mbytes_per_sec": 0 00:04:51.746 }, 00:04:51.746 "claimed": false, 00:04:51.746 "zoned": false, 00:04:51.746 "supported_io_types": { 00:04:51.746 "read": true, 00:04:51.746 "write": true, 00:04:51.746 "unmap": true, 00:04:51.746 "flush": true, 00:04:51.746 "reset": true, 
00:04:51.746 "nvme_admin": false, 00:04:51.746 "nvme_io": false, 00:04:51.746 "nvme_io_md": false, 00:04:51.746 "write_zeroes": true, 00:04:51.746 "zcopy": true, 00:04:51.746 "get_zone_info": false, 00:04:51.746 "zone_management": false, 00:04:51.746 "zone_append": false, 00:04:51.746 "compare": false, 00:04:51.746 "compare_and_write": false, 00:04:51.746 "abort": true, 00:04:51.746 "seek_hole": false, 00:04:51.746 "seek_data": false, 00:04:51.746 "copy": true, 00:04:51.746 "nvme_iov_md": false 00:04:51.746 }, 00:04:51.746 "memory_domains": [ 00:04:51.746 { 00:04:51.746 "dma_device_id": "system", 00:04:51.746 "dma_device_type": 1 00:04:51.746 }, 00:04:51.746 { 00:04:51.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.746 "dma_device_type": 2 00:04:51.746 } 00:04:51.746 ], 00:04:51.746 "driver_specific": { 00:04:51.746 "passthru": { 00:04:51.746 "name": "Passthru0", 00:04:51.746 "base_bdev_name": "Malloc2" 00:04:51.746 } 00:04:51.746 } 00:04:51.746 } 00:04:51.746 ]' 00:04:51.746 15:14:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:51.746 15:14:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:51.746 15:14:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:51.746 15:14:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.746 15:14:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.746 15:14:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.746 15:14:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:51.746 15:14:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.746 15:14:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.746 15:14:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.746 15:14:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:51.746 15:14:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.746 15:14:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.746 15:14:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.746 15:14:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:51.746 15:14:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:51.746 15:14:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:51.746 00:04:51.746 real 0m0.306s 00:04:51.746 user 0m0.188s 00:04:51.746 sys 0m0.051s 00:04:51.746 15:14:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.746 15:14:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.746 ************************************ 00:04:51.746 END TEST rpc_daemon_integrity 00:04:51.746 ************************************ 00:04:51.746 15:14:40 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:51.746 15:14:40 rpc -- rpc/rpc.sh@84 -- # killprocess 360642 00:04:51.747 15:14:40 rpc -- common/autotest_common.sh@954 -- # '[' -z 360642 ']' 00:04:51.747 15:14:40 rpc -- common/autotest_common.sh@958 -- # kill -0 360642 00:04:51.747 15:14:40 rpc -- common/autotest_common.sh@959 -- # uname 00:04:51.747 15:14:40 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:51.747 15:14:40 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 360642 
00:04:52.007 15:14:40 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.007 15:14:40 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.007 15:14:40 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 360642' 00:04:52.007 killing process with pid 360642 00:04:52.007 15:14:40 rpc -- common/autotest_common.sh@973 -- # kill 360642 00:04:52.007 15:14:40 rpc -- common/autotest_common.sh@978 -- # wait 360642 00:04:52.268 00:04:52.268 real 0m2.708s 00:04:52.268 user 0m3.417s 00:04:52.268 sys 0m0.860s 00:04:52.268 15:14:40 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.268 15:14:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.268 ************************************ 00:04:52.268 END TEST rpc 00:04:52.268 ************************************ 00:04:52.268 15:14:41 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:52.268 15:14:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.268 15:14:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.268 15:14:41 -- common/autotest_common.sh@10 -- # set +x 00:04:52.268 ************************************ 00:04:52.268 START TEST skip_rpc 00:04:52.268 ************************************ 00:04:52.268 15:14:41 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:52.268 * Looking for test storage... 00:04:52.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:52.268 15:14:41 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:52.268 15:14:41 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:52.268 15:14:41 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:52.529 15:14:41 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:52.529 15:14:41 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.529 15:14:41 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.529 15:14:41 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.529 15:14:41 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.529 15:14:41 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.529 15:14:41 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.529 15:14:41 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.529 15:14:41 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.529 15:14:41 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.529 15:14:41 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.529 15:14:41 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.529 15:14:41 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:52.530 15:14:41 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:52.530 15:14:41 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.530 15:14:41 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:52.530 15:14:41 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:52.530 15:14:41 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:52.530 15:14:41 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.530 15:14:41 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:52.530 15:14:41 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.530 15:14:41 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:52.530 15:14:41 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:52.530 15:14:41 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.530 15:14:41 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:52.530 15:14:41 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.530 15:14:41 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.530 15:14:41 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.530 15:14:41 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:52.530 15:14:41 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.530 15:14:41 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:52.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.530 --rc genhtml_branch_coverage=1 00:04:52.530 --rc genhtml_function_coverage=1 00:04:52.530 --rc genhtml_legend=1 00:04:52.530 --rc geninfo_all_blocks=1 00:04:52.530 --rc geninfo_unexecuted_blocks=1 00:04:52.530 00:04:52.530 ' 00:04:52.530 15:14:41 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:52.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.530 --rc genhtml_branch_coverage=1 00:04:52.530 --rc genhtml_function_coverage=1 00:04:52.530 --rc genhtml_legend=1 00:04:52.530 --rc geninfo_all_blocks=1 00:04:52.530 --rc geninfo_unexecuted_blocks=1 00:04:52.530 00:04:52.530 ' 00:04:52.530 15:14:41 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:52.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.530 --rc genhtml_branch_coverage=1 00:04:52.530 --rc genhtml_function_coverage=1 00:04:52.530 --rc genhtml_legend=1 00:04:52.530 --rc geninfo_all_blocks=1 00:04:52.530 --rc geninfo_unexecuted_blocks=1 00:04:52.530 00:04:52.530 ' 00:04:52.530 15:14:41 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:52.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.530 --rc genhtml_branch_coverage=1 00:04:52.530 --rc genhtml_function_coverage=1 00:04:52.530 --rc genhtml_legend=1 00:04:52.530 --rc geninfo_all_blocks=1 00:04:52.530 --rc geninfo_unexecuted_blocks=1 00:04:52.530 00:04:52.530 ' 00:04:52.530 15:14:41 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:52.530 15:14:41 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:52.530 15:14:41 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:52.530 15:14:41 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.530 15:14:41 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.530 15:14:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.530 ************************************ 00:04:52.530 START TEST skip_rpc 00:04:52.530 ************************************ 00:04:52.530 15:14:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:52.530 
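The skip_rpc test that follows launches spdk_tgt with --no-rpc-server, so its NOT rpc_cmd spdk_get_version assertion passes precisely because no RPC socket exists. A short sketch of the failure path it relies on (plain POSIX; the socket path matches the default used above):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int
    main(void)
    {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        struct sockaddr_un sa = { .sun_family = AF_UNIX };
        strncpy(sa.sun_path, "/var/tmp/spdk.sock", sizeof(sa.sun_path) - 1);
        if (fd < 0 || connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
            /* Expected with --no-rpc-server: ENOENT (socket never created)
             * or ECONNREFUSED; the NOT wrapper asserts this nonzero exit. */
            fprintf(stderr, "rpc unavailable: %s\n", strerror(errno));
            return 1;
        }
        close(fd);
        return 0;
    }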
15:14:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=361492 00:04:52.530 15:14:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.530 15:14:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:52.530 15:14:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:52.530 [2024-11-20 15:14:41.355225] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:04:52.530 [2024-11-20 15:14:41.355283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid361492 ] 00:04:52.530 [2024-11-20 15:14:41.448035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.791 [2024-11-20 15:14:41.501119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.078 15:14:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:58.078 15:14:46 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:58.078 15:14:46 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:58.078 15:14:46 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:58.078 15:14:46 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:58.078 15:14:46 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:58.078 15:14:46 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:58.078 15:14:46 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:58.079 15:14:46 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.079 15:14:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.079 15:14:46 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:58.079 15:14:46 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:58.079 15:14:46 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:58.079 15:14:46 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:58.079 15:14:46 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:58.079 15:14:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:58.079 15:14:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 361492 00:04:58.079 15:14:46 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 361492 ']' 00:04:58.079 15:14:46 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 361492 00:04:58.079 15:14:46 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:58.079 15:14:46 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:58.079 15:14:46 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 361492 00:04:58.079 15:14:46 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:58.079 15:14:46 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:58.079 15:14:46 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 361492' 00:04:58.079 killing process with pid 361492 00:04:58.079 15:14:46 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 361492 00:04:58.079 15:14:46 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 361492 00:04:58.079 00:04:58.079 real 0m5.262s 00:04:58.079 user 0m5.011s 00:04:58.079 sys 0m0.299s 00:04:58.079 15:14:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.079 15:14:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.079 ************************************ 00:04:58.079 END TEST skip_rpc 00:04:58.079 ************************************ 00:04:58.079 15:14:46 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:58.079 15:14:46 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.079 15:14:46 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.079 15:14:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.079 ************************************ 00:04:58.079 START TEST skip_rpc_with_json 00:04:58.079 ************************************ 00:04:58.079 15:14:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:58.079 15:14:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:58.079 15:14:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=362531 00:04:58.079 15:14:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.079 15:14:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:58.079 15:14:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 362531 00:04:58.079 15:14:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 362531 ']' 00:04:58.079 15:14:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.079 15:14:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.079 15:14:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.079 15:14:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.079 15:14:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:58.079 [2024-11-20 15:14:46.697942] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
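For context on the skip_rpc flow that just completed: the target is launched with --no-rpc-server, so the single rpc_cmd call is expected to fail, and the NOT wrapper turns that failure into a pass. A minimal sketch of the same shape, with paths shortened and a simplified NOT (the real helper in autotest_common.sh also does the valid_exec_arg bookkeeping traced above):

  # Simplified sketch; spdk_tgt stands in for the full build/bin/spdk_tgt path.
  NOT() {
      # Invert the wrapped command's status: pass only if it fails.
      if "$@"; then return 1; fi
      return 0
  }
  spdk_tgt --no-rpc-server -m 0x1 &
  spdk_pid=$!
  trap 'kill -9 $spdk_pid; exit 1' SIGINT SIGTERM EXIT
  sleep 5                              # no RPC socket to wait on, so just give it time
  NOT rpc_cmd spdk_get_version         # must fail: no RPC server was started
  trap - SIGINT SIGTERM EXIT
  kill "$spdk_pid" && wait "$spdk_pid" # killprocess equivalent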
00:04:58.079 [2024-11-20 15:14:46.697993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid362531 ] 00:04:58.079 [2024-11-20 15:14:46.782762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.079 [2024-11-20 15:14:46.815439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.649 15:14:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.649 15:14:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:58.649 15:14:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:58.649 15:14:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.649 15:14:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:58.649 [2024-11-20 15:14:47.486383] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:58.649 request: 00:04:58.649 { 00:04:58.649 "trtype": "tcp", 00:04:58.650 "method": "nvmf_get_transports", 00:04:58.650 "req_id": 1 00:04:58.650 } 00:04:58.650 Got JSON-RPC error response 00:04:58.650 response: 00:04:58.650 { 00:04:58.650 "code": -19, 00:04:58.650 "message": "No such device" 00:04:58.650 } 00:04:58.650 15:14:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:58.650 15:14:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:58.650 15:14:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.650 15:14:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:58.650 [2024-11-20 15:14:47.498476] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:58.650 15:14:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.650 15:14:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:58.650 15:14:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.650 15:14:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:58.910 15:14:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.910 15:14:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:58.910 { 00:04:58.910 "subsystems": [ 00:04:58.910 { 00:04:58.910 "subsystem": "fsdev", 00:04:58.910 "config": [ 00:04:58.910 { 00:04:58.910 "method": "fsdev_set_opts", 00:04:58.910 "params": { 00:04:58.910 "fsdev_io_pool_size": 65535, 00:04:58.910 "fsdev_io_cache_size": 256 00:04:58.910 } 00:04:58.910 } 00:04:58.910 ] 00:04:58.910 }, 00:04:58.910 { 00:04:58.910 "subsystem": "vfio_user_target", 00:04:58.910 "config": null 00:04:58.910 }, 00:04:58.910 { 00:04:58.910 "subsystem": "keyring", 00:04:58.910 "config": [] 00:04:58.910 }, 00:04:58.910 { 00:04:58.910 "subsystem": "iobuf", 00:04:58.910 "config": [ 00:04:58.910 { 00:04:58.910 "method": "iobuf_set_options", 00:04:58.910 "params": { 00:04:58.910 "small_pool_count": 8192, 00:04:58.910 "large_pool_count": 1024, 00:04:58.910 "small_bufsize": 8192, 00:04:58.910 "large_bufsize": 135168, 00:04:58.910 "enable_numa": false 00:04:58.910 } 00:04:58.910 } 00:04:58.910 
] 00:04:58.910 }, 00:04:58.910 { 00:04:58.910 "subsystem": "sock", 00:04:58.910 "config": [ 00:04:58.910 { 00:04:58.910 "method": "sock_set_default_impl", 00:04:58.910 "params": { 00:04:58.910 "impl_name": "posix" 00:04:58.910 } 00:04:58.910 }, 00:04:58.910 { 00:04:58.910 "method": "sock_impl_set_options", 00:04:58.910 "params": { 00:04:58.910 "impl_name": "ssl", 00:04:58.910 "recv_buf_size": 4096, 00:04:58.910 "send_buf_size": 4096, 00:04:58.910 "enable_recv_pipe": true, 00:04:58.910 "enable_quickack": false, 00:04:58.910 "enable_placement_id": 0, 00:04:58.910 "enable_zerocopy_send_server": true, 00:04:58.910 "enable_zerocopy_send_client": false, 00:04:58.910 "zerocopy_threshold": 0, 00:04:58.910 "tls_version": 0, 00:04:58.910 "enable_ktls": false 00:04:58.910 } 00:04:58.910 }, 00:04:58.910 { 00:04:58.910 "method": "sock_impl_set_options", 00:04:58.910 "params": { 00:04:58.910 "impl_name": "posix", 00:04:58.910 "recv_buf_size": 2097152, 00:04:58.910 "send_buf_size": 2097152, 00:04:58.910 "enable_recv_pipe": true, 00:04:58.910 "enable_quickack": false, 00:04:58.910 "enable_placement_id": 0, 00:04:58.910 "enable_zerocopy_send_server": true, 00:04:58.910 "enable_zerocopy_send_client": false, 00:04:58.910 "zerocopy_threshold": 0, 00:04:58.910 "tls_version": 0, 00:04:58.910 "enable_ktls": false 00:04:58.910 } 00:04:58.910 } 00:04:58.910 ] 00:04:58.910 }, 00:04:58.910 { 00:04:58.910 "subsystem": "vmd", 00:04:58.910 "config": [] 00:04:58.910 }, 00:04:58.910 { 00:04:58.910 "subsystem": "accel", 00:04:58.910 "config": [ 00:04:58.910 { 00:04:58.910 "method": "accel_set_options", 00:04:58.910 "params": { 00:04:58.910 "small_cache_size": 128, 00:04:58.910 "large_cache_size": 16, 00:04:58.910 "task_count": 2048, 00:04:58.910 "sequence_count": 2048, 00:04:58.910 "buf_count": 2048 00:04:58.910 } 00:04:58.910 } 00:04:58.910 ] 00:04:58.910 }, 00:04:58.910 { 00:04:58.910 "subsystem": "bdev", 00:04:58.910 "config": [ 00:04:58.910 { 00:04:58.910 "method": "bdev_set_options", 00:04:58.910 "params": { 00:04:58.910 "bdev_io_pool_size": 65535, 00:04:58.910 "bdev_io_cache_size": 256, 00:04:58.910 "bdev_auto_examine": true, 00:04:58.910 "iobuf_small_cache_size": 128, 00:04:58.910 "iobuf_large_cache_size": 16 00:04:58.910 } 00:04:58.910 }, 00:04:58.910 { 00:04:58.910 "method": "bdev_raid_set_options", 00:04:58.910 "params": { 00:04:58.911 "process_window_size_kb": 1024, 00:04:58.911 "process_max_bandwidth_mb_sec": 0 00:04:58.911 } 00:04:58.911 }, 00:04:58.911 { 00:04:58.911 "method": "bdev_iscsi_set_options", 00:04:58.911 "params": { 00:04:58.911 "timeout_sec": 30 00:04:58.911 } 00:04:58.911 }, 00:04:58.911 { 00:04:58.911 "method": "bdev_nvme_set_options", 00:04:58.911 "params": { 00:04:58.911 "action_on_timeout": "none", 00:04:58.911 "timeout_us": 0, 00:04:58.911 "timeout_admin_us": 0, 00:04:58.911 "keep_alive_timeout_ms": 10000, 00:04:58.911 "arbitration_burst": 0, 00:04:58.911 "low_priority_weight": 0, 00:04:58.911 "medium_priority_weight": 0, 00:04:58.911 "high_priority_weight": 0, 00:04:58.911 "nvme_adminq_poll_period_us": 10000, 00:04:58.911 "nvme_ioq_poll_period_us": 0, 00:04:58.911 "io_queue_requests": 0, 00:04:58.911 "delay_cmd_submit": true, 00:04:58.911 "transport_retry_count": 4, 00:04:58.911 "bdev_retry_count": 3, 00:04:58.911 "transport_ack_timeout": 0, 00:04:58.911 "ctrlr_loss_timeout_sec": 0, 00:04:58.911 "reconnect_delay_sec": 0, 00:04:58.911 "fast_io_fail_timeout_sec": 0, 00:04:58.911 "disable_auto_failback": false, 00:04:58.911 "generate_uuids": false, 00:04:58.911 "transport_tos": 0, 
00:04:58.911 "nvme_error_stat": false, 00:04:58.911 "rdma_srq_size": 0, 00:04:58.911 "io_path_stat": false, 00:04:58.911 "allow_accel_sequence": false, 00:04:58.911 "rdma_max_cq_size": 0, 00:04:58.911 "rdma_cm_event_timeout_ms": 0, 00:04:58.911 "dhchap_digests": [ 00:04:58.911 "sha256", 00:04:58.911 "sha384", 00:04:58.911 "sha512" 00:04:58.911 ], 00:04:58.911 "dhchap_dhgroups": [ 00:04:58.911 "null", 00:04:58.911 "ffdhe2048", 00:04:58.911 "ffdhe3072", 00:04:58.911 "ffdhe4096", 00:04:58.911 "ffdhe6144", 00:04:58.911 "ffdhe8192" 00:04:58.911 ] 00:04:58.911 } 00:04:58.911 }, 00:04:58.911 { 00:04:58.911 "method": "bdev_nvme_set_hotplug", 00:04:58.911 "params": { 00:04:58.911 "period_us": 100000, 00:04:58.911 "enable": false 00:04:58.911 } 00:04:58.911 }, 00:04:58.911 { 00:04:58.911 "method": "bdev_wait_for_examine" 00:04:58.911 } 00:04:58.911 ] 00:04:58.911 }, 00:04:58.911 { 00:04:58.911 "subsystem": "scsi", 00:04:58.911 "config": null 00:04:58.911 }, 00:04:58.911 { 00:04:58.911 "subsystem": "scheduler", 00:04:58.911 "config": [ 00:04:58.911 { 00:04:58.911 "method": "framework_set_scheduler", 00:04:58.911 "params": { 00:04:58.911 "name": "static" 00:04:58.911 } 00:04:58.911 } 00:04:58.911 ] 00:04:58.911 }, 00:04:58.911 { 00:04:58.911 "subsystem": "vhost_scsi", 00:04:58.911 "config": [] 00:04:58.911 }, 00:04:58.911 { 00:04:58.911 "subsystem": "vhost_blk", 00:04:58.911 "config": [] 00:04:58.911 }, 00:04:58.911 { 00:04:58.911 "subsystem": "ublk", 00:04:58.911 "config": [] 00:04:58.911 }, 00:04:58.911 { 00:04:58.911 "subsystem": "nbd", 00:04:58.911 "config": [] 00:04:58.911 }, 00:04:58.911 { 00:04:58.911 "subsystem": "nvmf", 00:04:58.911 "config": [ 00:04:58.911 { 00:04:58.911 "method": "nvmf_set_config", 00:04:58.911 "params": { 00:04:58.911 "discovery_filter": "match_any", 00:04:58.911 "admin_cmd_passthru": { 00:04:58.911 "identify_ctrlr": false 00:04:58.911 }, 00:04:58.911 "dhchap_digests": [ 00:04:58.911 "sha256", 00:04:58.911 "sha384", 00:04:58.911 "sha512" 00:04:58.911 ], 00:04:58.911 "dhchap_dhgroups": [ 00:04:58.911 "null", 00:04:58.911 "ffdhe2048", 00:04:58.911 "ffdhe3072", 00:04:58.911 "ffdhe4096", 00:04:58.911 "ffdhe6144", 00:04:58.911 "ffdhe8192" 00:04:58.911 ] 00:04:58.911 } 00:04:58.911 }, 00:04:58.911 { 00:04:58.911 "method": "nvmf_set_max_subsystems", 00:04:58.911 "params": { 00:04:58.911 "max_subsystems": 1024 00:04:58.911 } 00:04:58.911 }, 00:04:58.911 { 00:04:58.911 "method": "nvmf_set_crdt", 00:04:58.911 "params": { 00:04:58.911 "crdt1": 0, 00:04:58.911 "crdt2": 0, 00:04:58.911 "crdt3": 0 00:04:58.911 } 00:04:58.911 }, 00:04:58.911 { 00:04:58.911 "method": "nvmf_create_transport", 00:04:58.911 "params": { 00:04:58.911 "trtype": "TCP", 00:04:58.911 "max_queue_depth": 128, 00:04:58.911 "max_io_qpairs_per_ctrlr": 127, 00:04:58.911 "in_capsule_data_size": 4096, 00:04:58.911 "max_io_size": 131072, 00:04:58.911 "io_unit_size": 131072, 00:04:58.911 "max_aq_depth": 128, 00:04:58.911 "num_shared_buffers": 511, 00:04:58.911 "buf_cache_size": 4294967295, 00:04:58.911 "dif_insert_or_strip": false, 00:04:58.911 "zcopy": false, 00:04:58.911 "c2h_success": true, 00:04:58.911 "sock_priority": 0, 00:04:58.911 "abort_timeout_sec": 1, 00:04:58.911 "ack_timeout": 0, 00:04:58.911 "data_wr_pool_size": 0 00:04:58.911 } 00:04:58.911 } 00:04:58.911 ] 00:04:58.911 }, 00:04:58.911 { 00:04:58.911 "subsystem": "iscsi", 00:04:58.911 "config": [ 00:04:58.911 { 00:04:58.911 "method": "iscsi_set_options", 00:04:58.911 "params": { 00:04:58.911 "node_base": "iqn.2016-06.io.spdk", 00:04:58.911 "max_sessions": 
128, 00:04:58.911 "max_connections_per_session": 2, 00:04:58.911 "max_queue_depth": 64, 00:04:58.911 "default_time2wait": 2, 00:04:58.911 "default_time2retain": 20, 00:04:58.911 "first_burst_length": 8192, 00:04:58.911 "immediate_data": true, 00:04:58.911 "allow_duplicated_isid": false, 00:04:58.911 "error_recovery_level": 0, 00:04:58.911 "nop_timeout": 60, 00:04:58.911 "nop_in_interval": 30, 00:04:58.911 "disable_chap": false, 00:04:58.911 "require_chap": false, 00:04:58.911 "mutual_chap": false, 00:04:58.911 "chap_group": 0, 00:04:58.911 "max_large_datain_per_connection": 64, 00:04:58.911 "max_r2t_per_connection": 4, 00:04:58.911 "pdu_pool_size": 36864, 00:04:58.911 "immediate_data_pool_size": 16384, 00:04:58.911 "data_out_pool_size": 2048 00:04:58.911 } 00:04:58.911 } 00:04:58.911 ] 00:04:58.911 } 00:04:58.911 ] 00:04:58.911 } 00:04:58.911 15:14:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:58.911 15:14:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 362531 00:04:58.911 15:14:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 362531 ']' 00:04:58.911 15:14:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 362531 00:04:58.911 15:14:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:58.911 15:14:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:58.911 15:14:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 362531 00:04:58.911 15:14:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:58.911 15:14:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:58.911 15:14:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 362531' 00:04:58.911 killing process with pid 362531 00:04:58.911 15:14:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 362531 00:04:58.911 15:14:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 362531 00:04:59.171 15:14:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=362871 00:04:59.171 15:14:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:59.171 15:14:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:04.456 15:14:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 362871 00:05:04.456 15:14:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 362871 ']' 00:05:04.456 15:14:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 362871 00:05:04.456 15:14:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:04.456 15:14:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:04.456 15:14:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 362871 00:05:04.456 15:14:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:04.456 15:14:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:04.456 15:14:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 362871' 00:05:04.456 killing process with pid 362871 00:05:04.456 15:14:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 362871 00:05:04.456 15:14:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 362871 00:05:04.456 15:14:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:04.456 15:14:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:04.456 00:05:04.456 real 0m6.555s 00:05:04.456 user 0m6.479s 00:05:04.456 sys 0m0.550s 00:05:04.456 15:14:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.456 15:14:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:04.456 ************************************ 00:05:04.456 END TEST skip_rpc_with_json 00:05:04.456 ************************************ 00:05:04.456 15:14:53 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:04.456 15:14:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.456 15:14:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.456 15:14:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.456 ************************************ 00:05:04.456 START TEST skip_rpc_with_delay 00:05:04.456 ************************************ 00:05:04.456 15:14:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:04.456 15:14:53 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:04.456 15:14:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:04.456 15:14:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:04.456 15:14:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.456 15:14:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:04.456 15:14:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.456 15:14:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:04.456 15:14:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.456 15:14:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:04.456 15:14:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.456 15:14:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:04.456 15:14:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:04.456 [2024-11-20 
15:14:53.327529] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:04.456 15:14:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:04.456 15:14:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:04.456 15:14:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:04.456 15:14:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:04.456 00:05:04.456 real 0m0.077s 00:05:04.456 user 0m0.049s 00:05:04.456 sys 0m0.027s 00:05:04.456 15:14:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.456 15:14:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:04.456 ************************************ 00:05:04.456 END TEST skip_rpc_with_delay 00:05:04.456 ************************************ 00:05:04.456 15:14:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:04.456 15:14:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:04.456 15:14:53 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:04.456 15:14:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.456 15:14:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.456 15:14:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.718 ************************************ 00:05:04.718 START TEST exit_on_failed_rpc_init 00:05:04.718 ************************************ 00:05:04.718 15:14:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:04.718 15:14:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=363942 00:05:04.718 15:14:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 363942 00:05:04.718 15:14:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:04.718 15:14:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 363942 ']' 00:05:04.718 15:14:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.718 15:14:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.718 15:14:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.718 15:14:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.718 15:14:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:04.718 [2024-11-20 15:14:53.484216] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
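The skip_rpc_with_delay failure just above is the expected outcome: app.c rejects --wait-for-rpc when --no-rpc-server is also given. A hedged sketch of asserting that startup error directly (the real test routes this through NOT and the es accounting shown in the trace):

  if out=$(spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 2>&1); then
      echo "expected spdk_tgt startup to fail" >&2
      exit 1
  fi
  # Match the app.c error text printed above.
  grep -q "Cannot use '--wait-for-rpc'" <<<"$out"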
00:05:04.718 [2024-11-20 15:14:53.484265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid363942 ] 00:05:04.718 [2024-11-20 15:14:53.567843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.718 [2024-11-20 15:14:53.599652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.659 15:14:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.659 15:14:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:05.659 15:14:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.659 15:14:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:05.659 15:14:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:05.659 15:14:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:05.659 15:14:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:05.659 15:14:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:05.659 15:14:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:05.659 15:14:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:05.659 15:14:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:05.659 15:14:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:05.659 15:14:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:05.659 15:14:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:05.659 15:14:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:05.659 [2024-11-20 15:14:54.330227] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:05:05.659 [2024-11-20 15:14:54.330279] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid364114 ] 00:05:05.659 [2024-11-20 15:14:54.416932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.659 [2024-11-20 15:14:54.452704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.659 [2024-11-20 15:14:54.452752] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
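The es=234 / es=106 / es=1 hops visible in exit_on_failed_rpc_init are the harness normalizing exit statuses: values above 128 have the signal offset stripped, and known failure codes are then collapsed to a plain 1. A reconstruction of that logic, hedged because only fragments of autotest_common.sh appear in this trace (the exact code list is assumed):

  es=$?                      # e.g. 234 from the second spdk_tgt losing the socket race
  if (( es > 128 )); then
      es=$(( es - 128 ))     # 234 -> 106: strip the 128+signal offset
  fi
  case "$es" in
      106) es=1 ;;           # known failure codes collapse to 1 (list assumed)
  esac
  (( !es == 0 )) && echo "command failed as expected (es=$es)"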
00:05:05.659 [2024-11-20 15:14:54.452762] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:05.659 [2024-11-20 15:14:54.452768] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:05.659 15:14:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:05.659 15:14:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:05.659 15:14:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:05.659 15:14:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:05.659 15:14:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:05.659 15:14:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:05.659 15:14:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:05.659 15:14:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 363942 00:05:05.659 15:14:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 363942 ']' 00:05:05.659 15:14:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 363942 00:05:05.659 15:14:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:05.659 15:14:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.659 15:14:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 363942 00:05:05.659 15:14:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.659 15:14:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.659 15:14:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 363942' 00:05:05.659 killing process with pid 363942 00:05:05.659 15:14:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 363942 00:05:05.659 15:14:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 363942 00:05:05.920 00:05:05.920 real 0m1.313s 00:05:05.920 user 0m1.539s 00:05:05.920 sys 0m0.372s 00:05:05.920 15:14:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.920 15:14:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:05.920 ************************************ 00:05:05.920 END TEST exit_on_failed_rpc_init 00:05:05.920 ************************************ 00:05:05.920 15:14:54 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:05.920 00:05:05.920 real 0m13.732s 00:05:05.920 user 0m13.294s 00:05:05.920 sys 0m1.588s 00:05:05.920 15:14:54 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.920 15:14:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.920 ************************************ 00:05:05.920 END TEST skip_rpc 00:05:05.920 ************************************ 00:05:05.920 15:14:54 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:05.920 15:14:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.920 15:14:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.920 15:14:54 -- 
common/autotest_common.sh@10 -- # set +x 00:05:05.920 ************************************ 00:05:05.920 START TEST rpc_client 00:05:05.920 ************************************ 00:05:05.920 15:14:54 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:06.181 * Looking for test storage... 00:05:06.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:06.181 15:14:54 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:06.181 15:14:54 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:06.181 15:14:54 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:06.181 15:14:55 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:06.181 15:14:55 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.181 15:14:55 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.181 15:14:55 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.181 15:14:55 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.181 15:14:55 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.181 15:14:55 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.181 15:14:55 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.181 15:14:55 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.181 15:14:55 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.181 15:14:55 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.181 15:14:55 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.181 15:14:55 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:06.181 15:14:55 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:06.181 15:14:55 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.181 15:14:55 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:06.181 15:14:55 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:06.181 15:14:55 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:06.181 15:14:55 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.181 15:14:55 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:06.181 15:14:55 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.181 15:14:55 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:06.181 15:14:55 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:06.181 15:14:55 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.181 15:14:55 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:06.181 15:14:55 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.181 15:14:55 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.181 15:14:55 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.181 15:14:55 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:06.181 15:14:55 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.181 15:14:55 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:06.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.181 --rc genhtml_branch_coverage=1 00:05:06.181 --rc genhtml_function_coverage=1 00:05:06.181 --rc genhtml_legend=1 00:05:06.181 --rc geninfo_all_blocks=1 00:05:06.181 --rc geninfo_unexecuted_blocks=1 00:05:06.181 00:05:06.181 ' 00:05:06.181 15:14:55 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:06.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.181 --rc genhtml_branch_coverage=1 00:05:06.181 --rc genhtml_function_coverage=1 00:05:06.181 --rc genhtml_legend=1 00:05:06.181 --rc geninfo_all_blocks=1 00:05:06.181 --rc geninfo_unexecuted_blocks=1 00:05:06.181 00:05:06.181 ' 00:05:06.181 15:14:55 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:06.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.181 --rc genhtml_branch_coverage=1 00:05:06.181 --rc genhtml_function_coverage=1 00:05:06.181 --rc genhtml_legend=1 00:05:06.181 --rc geninfo_all_blocks=1 00:05:06.181 --rc geninfo_unexecuted_blocks=1 00:05:06.181 00:05:06.181 ' 00:05:06.181 15:14:55 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:06.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.181 --rc genhtml_branch_coverage=1 00:05:06.181 --rc genhtml_function_coverage=1 00:05:06.181 --rc genhtml_legend=1 00:05:06.181 --rc geninfo_all_blocks=1 00:05:06.181 --rc geninfo_unexecuted_blocks=1 00:05:06.181 00:05:06.181 ' 00:05:06.181 15:14:55 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:06.181 OK 00:05:06.181 15:14:55 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:06.181 00:05:06.181 real 0m0.223s 00:05:06.181 user 0m0.138s 00:05:06.181 sys 0m0.099s 00:05:06.181 15:14:55 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.181 15:14:55 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:06.181 ************************************ 00:05:06.181 END TEST rpc_client 00:05:06.181 ************************************ 00:05:06.182 15:14:55 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
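The scripts/common.sh trace that reruns before each suite (decimal, ver1[v], ver2[v]) is a dotted-version comparison deciding whether the installed lcov is older than 2, which in turn picks the --rc lcov_* option spelling. A condensed sketch of the same comparison (the real helper also regex-checks that each field is numeric):

  # lt_version A B: succeed when dotted version A sorts before B.
  lt_version() {
      local -a ver1 ver2
      IFS=.- read -ra ver1 <<<"$1"
      IFS=.- read -ra ver2 <<<"$2"
      local v
      for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1               # equal versions are not "less than"
  }
  lt_version 1.15 2 && LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'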
00:05:06.182 15:14:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.182 15:14:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.182 15:14:55 -- common/autotest_common.sh@10 -- # set +x 00:05:06.444 ************************************ 00:05:06.444 START TEST json_config 00:05:06.444 ************************************ 00:05:06.444 15:14:55 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:06.444 15:14:55 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:06.444 15:14:55 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:06.444 15:14:55 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:06.444 15:14:55 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:06.444 15:14:55 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.444 15:14:55 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.444 15:14:55 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.444 15:14:55 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.444 15:14:55 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.444 15:14:55 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.444 15:14:55 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.444 15:14:55 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.444 15:14:55 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.444 15:14:55 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.444 15:14:55 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.444 15:14:55 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:06.444 15:14:55 json_config -- scripts/common.sh@345 -- # : 1 00:05:06.444 15:14:55 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.444 15:14:55 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:06.444 15:14:55 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:06.444 15:14:55 json_config -- scripts/common.sh@353 -- # local d=1 00:05:06.444 15:14:55 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.444 15:14:55 json_config -- scripts/common.sh@355 -- # echo 1 00:05:06.444 15:14:55 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.444 15:14:55 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:06.444 15:14:55 json_config -- scripts/common.sh@353 -- # local d=2 00:05:06.444 15:14:55 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.444 15:14:55 json_config -- scripts/common.sh@355 -- # echo 2 00:05:06.444 15:14:55 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.444 15:14:55 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.444 15:14:55 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.444 15:14:55 json_config -- scripts/common.sh@368 -- # return 0 00:05:06.444 15:14:55 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.444 15:14:55 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:06.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.444 --rc genhtml_branch_coverage=1 00:05:06.444 --rc genhtml_function_coverage=1 00:05:06.444 --rc genhtml_legend=1 00:05:06.444 --rc geninfo_all_blocks=1 00:05:06.444 --rc geninfo_unexecuted_blocks=1 00:05:06.444 00:05:06.444 ' 00:05:06.444 15:14:55 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:06.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.444 --rc genhtml_branch_coverage=1 00:05:06.444 --rc genhtml_function_coverage=1 00:05:06.444 --rc genhtml_legend=1 00:05:06.444 --rc geninfo_all_blocks=1 00:05:06.444 --rc geninfo_unexecuted_blocks=1 00:05:06.444 00:05:06.444 ' 00:05:06.444 15:14:55 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:06.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.444 --rc genhtml_branch_coverage=1 00:05:06.444 --rc genhtml_function_coverage=1 00:05:06.444 --rc genhtml_legend=1 00:05:06.444 --rc geninfo_all_blocks=1 00:05:06.444 --rc geninfo_unexecuted_blocks=1 00:05:06.444 00:05:06.444 ' 00:05:06.444 15:14:55 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:06.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.444 --rc genhtml_branch_coverage=1 00:05:06.444 --rc genhtml_function_coverage=1 00:05:06.444 --rc genhtml_legend=1 00:05:06.444 --rc geninfo_all_blocks=1 00:05:06.444 --rc geninfo_unexecuted_blocks=1 00:05:06.444 00:05:06.444 ' 00:05:06.444 15:14:55 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:06.444 15:14:55 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:06.444 15:14:55 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:06.444 15:14:55 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:06.444 15:14:55 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:06.444 15:14:55 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:06.445 15:14:55 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:06.445 15:14:55 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:06.445 15:14:55 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:05:06.445 15:14:55 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:06.445 15:14:55 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:06.445 15:14:55 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:06.445 15:14:55 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:06.445 15:14:55 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:06.445 15:14:55 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:06.445 15:14:55 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:06.445 15:14:55 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:06.445 15:14:55 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:06.445 15:14:55 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:06.445 15:14:55 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:06.445 15:14:55 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:06.445 15:14:55 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:06.445 15:14:55 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:06.445 15:14:55 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.445 15:14:55 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.445 15:14:55 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.445 15:14:55 json_config -- paths/export.sh@5 -- # export PATH 00:05:06.445 15:14:55 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.445 15:14:55 json_config -- nvmf/common.sh@51 -- # : 0 00:05:06.445 15:14:55 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:06.445 15:14:55 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:05:06.445 15:14:55 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:06.445 15:14:55 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:06.445 15:14:55 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:06.445 15:14:55 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:06.445 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:06.445 15:14:55 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:06.445 15:14:55 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:06.445 15:14:55 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:06.445 15:14:55 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:06.445 15:14:55 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:06.445 15:14:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:06.445 15:14:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:06.445 15:14:55 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:06.445 15:14:55 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:06.445 15:14:55 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:06.445 15:14:55 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:06.445 15:14:55 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:06.445 15:14:55 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:06.445 15:14:55 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:06.445 15:14:55 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:06.445 15:14:55 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:06.445 15:14:55 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:06.445 15:14:55 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:06.445 15:14:55 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:06.445 INFO: JSON configuration test init 00:05:06.445 15:14:55 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:06.445 15:14:55 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:06.445 15:14:55 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:06.445 15:14:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.445 15:14:55 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:06.445 15:14:55 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:06.445 15:14:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.445 15:14:55 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:06.445 15:14:55 json_config -- 
json_config/common.sh@9 -- # local app=target 00:05:06.445 15:14:55 json_config -- json_config/common.sh@10 -- # shift 00:05:06.445 15:14:55 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:06.445 15:14:55 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:06.445 15:14:55 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:06.445 15:14:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:06.445 15:14:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:06.445 15:14:55 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=364413 00:05:06.445 15:14:55 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:06.445 Waiting for target to run... 00:05:06.445 15:14:55 json_config -- json_config/common.sh@25 -- # waitforlisten 364413 /var/tmp/spdk_tgt.sock 00:05:06.445 15:14:55 json_config -- common/autotest_common.sh@835 -- # '[' -z 364413 ']' 00:05:06.445 15:14:55 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:06.445 15:14:55 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.445 15:14:55 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:06.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:06.445 15:14:55 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:06.445 15:14:55 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.445 15:14:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.710 [2024-11-20 15:14:55.433915] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
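The nvmf/common.sh sourcing above mainly sets connection defaults: ports 4420-4422, a host NQN generated by nvme gen-hostnqn, and the NVME_HOST array carrying --hostnqn/--hostid. A hedged usage sketch of how a downstream test consumes them:

  source "$rootdir/test/nvmf/common.sh"   # $rootdir: the spdk tree (name assumed)
  # Connect to a target subsystem with the generated host identity:
  $NVME_CONNECT "${NVME_HOST[@]}" -t tcp \
      -a "$NVMF_TCP_IP_ADDRESS" -s "$NVMF_PORT" -n "$NVME_SUBNQN"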
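Separately, the "[: : integer expression expected" message traced at nvmf/common.sh line 33 is a classic shell bug: '[' '' -eq 1 ']' feeds an empty string to a numeric test. The run tolerates it because the && branch is simply skipped, but the usual guard is a default expansion; a sketch (variable name hypothetical):

  # Buggy shape, as traced above: fails noisily when the flag is unset.
  [ "$SPDK_TEST_FLAG" -eq 1 ] && have_pci_nics=1
  # Guarded shape: the numeric test always sees an integer.
  [ "${SPDK_TEST_FLAG:-0}" -eq 1 ] && have_pci_nics=1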
00:05:06.710 [2024-11-20 15:14:55.433993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid364413 ] 00:05:06.970 [2024-11-20 15:14:55.707850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.970 [2024-11-20 15:14:55.737396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.541 15:14:56 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.541 15:14:56 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:07.541 15:14:56 json_config -- json_config/common.sh@26 -- # echo '' 00:05:07.541 00:05:07.541 15:14:56 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:07.541 15:14:56 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:07.541 15:14:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:07.541 15:14:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.541 15:14:56 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:07.541 15:14:56 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:07.541 15:14:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:07.541 15:14:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.541 15:14:56 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:07.541 15:14:56 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:07.541 15:14:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:08.113 15:14:56 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:08.113 15:14:56 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:08.113 15:14:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:08.113 15:14:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.113 15:14:56 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:08.113 15:14:56 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:08.113 15:14:56 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:08.113 15:14:56 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:08.113 15:14:56 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:08.113 15:14:56 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:08.113 15:14:56 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:08.113 15:14:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:08.114 15:14:56 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:08.114 15:14:56 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:08.114 15:14:56 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:08.114 15:14:56 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:08.114 15:14:56 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:08.114 15:14:56 json_config -- json_config/json_config.sh@54 -- # sort 00:05:08.114 15:14:56 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:08.114 15:14:56 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:08.114 15:14:56 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:08.114 15:14:56 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:08.114 15:14:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:08.114 15:14:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.114 15:14:57 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:08.114 15:14:57 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:08.114 15:14:57 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:08.114 15:14:57 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:08.114 15:14:57 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:08.114 15:14:57 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:08.114 15:14:57 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:08.114 15:14:57 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:08.114 15:14:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.114 15:14:57 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:08.114 15:14:57 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:08.114 15:14:57 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:08.114 15:14:57 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:08.114 15:14:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:08.374 MallocForNvmf0 00:05:08.374 15:14:57 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:08.374 15:14:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:08.638 MallocForNvmf1 00:05:08.638 15:14:57 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:08.638 15:14:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:08.638 [2024-11-20 15:14:57.509189] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:08.638 15:14:57 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:08.638 15:14:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:08.900 15:14:57 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:08.900 15:14:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:09.162 15:14:57 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:09.162 15:14:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:09.162 15:14:58 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:09.162 15:14:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:09.423 [2024-11-20 15:14:58.183273] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:09.423 15:14:58 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:09.423 15:14:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:09.423 15:14:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.423 15:14:58 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:09.423 15:14:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:09.423 15:14:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.423 15:14:58 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:09.423 15:14:58 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:09.423 15:14:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:09.684 MallocBdevForConfigChangeCheck 00:05:09.684 15:14:58 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:09.684 15:14:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:09.684 15:14:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.684 15:14:58 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:09.684 15:14:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:09.945 15:14:58 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:09.945 INFO: shutting down applications... 
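The RPC sequence traced above is the entire NVMe/TCP target setup for this test. A minimal standalone sketch of the same calls, assuming spdk_tgt is already running and listening on /var/tmp/spdk_tgt.sock (paths, NQN, and option values are taken from the log; nothing beyond them is implied):

#!/usr/bin/env bash
# Sketch of the nvmf subsystem configuration driven by json_config.sh above.
set -euo pipefail
RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'

$RPC bdev_malloc_create 8 512 --name MallocForNvmf0    # 8 MB bdev, 512 B blocks
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MB bdev, 1024 B blocks
$RPC nvmf_create_transport -t tcp -u 8192 -c 0         # TCP transport, io unit size 8192, in-capsule data size 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # -a: allow any host, -s: serial
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420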
00:05:09.945 15:14:58 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:09.945 15:14:58 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:09.945 15:14:58 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:09.945 15:14:58 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:10.515 Calling clear_iscsi_subsystem 00:05:10.515 Calling clear_nvmf_subsystem 00:05:10.515 Calling clear_nbd_subsystem 00:05:10.515 Calling clear_ublk_subsystem 00:05:10.515 Calling clear_vhost_blk_subsystem 00:05:10.515 Calling clear_vhost_scsi_subsystem 00:05:10.515 Calling clear_bdev_subsystem 00:05:10.515 15:14:59 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:10.515 15:14:59 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:10.515 15:14:59 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:10.515 15:14:59 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:10.515 15:14:59 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:10.515 15:14:59 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:10.774 15:14:59 json_config -- json_config/json_config.sh@352 -- # break 00:05:10.774 15:14:59 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:10.774 15:14:59 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:10.774 15:14:59 json_config -- json_config/common.sh@31 -- # local app=target 00:05:10.774 15:14:59 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:10.774 15:14:59 json_config -- json_config/common.sh@35 -- # [[ -n 364413 ]] 00:05:10.774 15:14:59 json_config -- json_config/common.sh@38 -- # kill -SIGINT 364413 00:05:10.774 15:14:59 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:10.774 15:14:59 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:10.774 15:14:59 json_config -- json_config/common.sh@41 -- # kill -0 364413 00:05:10.774 15:14:59 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:11.344 15:15:00 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:11.344 15:15:00 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.344 15:15:00 json_config -- json_config/common.sh@41 -- # kill -0 364413 00:05:11.344 15:15:00 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:11.344 15:15:00 json_config -- json_config/common.sh@43 -- # break 00:05:11.344 15:15:00 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:11.344 15:15:00 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:11.344 SPDK target shutdown done 00:05:11.344 15:15:00 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:11.344 INFO: relaunching applications... 
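The shutdown just traced is a SIGINT followed by a bounded poll on the PID. A sketch of that polling pattern, with the PID and the 30 x 0.5 s budget taken from the (( i < 30 )) loop in json_config/common.sh above:

app_pid=364413                        # spdk_tgt PID from the log
kill -SIGINT "$app_pid"               # request a clean shutdown
for (( i = 0; i < 30; i++ )); do      # wait up to ~15 s
  if ! kill -0 "$app_pid" 2>/dev/null; then  # kill -0 probes only, sends no signal
    echo 'SPDK target shutdown done'
    break
  fi
  sleep 0.5
done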
00:05:11.344 15:15:00 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:11.344 15:15:00 json_config -- json_config/common.sh@9 -- # local app=target 00:05:11.344 15:15:00 json_config -- json_config/common.sh@10 -- # shift 00:05:11.344 15:15:00 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:11.344 15:15:00 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:11.344 15:15:00 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:11.344 15:15:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:11.344 15:15:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:11.344 15:15:00 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=365561 00:05:11.344 15:15:00 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:11.344 Waiting for target to run... 00:05:11.344 15:15:00 json_config -- json_config/common.sh@25 -- # waitforlisten 365561 /var/tmp/spdk_tgt.sock 00:05:11.344 15:15:00 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:11.344 15:15:00 json_config -- common/autotest_common.sh@835 -- # '[' -z 365561 ']' 00:05:11.344 15:15:00 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:11.344 15:15:00 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.344 15:15:00 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:11.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:11.344 15:15:00 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.344 15:15:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.344 [2024-11-20 15:15:00.216873] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:05:11.344 [2024-11-20 15:15:00.216925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid365561 ] 00:05:11.606 [2024-11-20 15:15:00.464505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.606 [2024-11-20 15:15:00.493238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.177 [2024-11-20 15:15:00.996259] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:12.177 [2024-11-20 15:15:01.028643] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:12.177 15:15:01 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.177 15:15:01 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:12.177 15:15:01 json_config -- json_config/common.sh@26 -- # echo '' 00:05:12.177 00:05:12.177 15:15:01 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:12.177 15:15:01 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:12.177 INFO: Checking if target configuration is the same... 
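Relaunching from the saved configuration needs no RPCs at all: the JSON written earlier by save_config is replayed at startup. A sketch using only the flags shown in the trace (-m 0x1: core mask, -s 1024: hugepage memory in MB, -r: RPC socket path):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 \
    -r /var/tmp/spdk_tgt.sock \
    --json "$SPDK/spdk_tgt_config.json" &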
00:05:12.177 15:15:01 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:12.177 15:15:01 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:12.177 15:15:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:12.177 + '[' 2 -ne 2 ']' 00:05:12.177 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:12.177 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:12.177 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:12.177 +++ basename /dev/fd/62 00:05:12.177 ++ mktemp /tmp/62.XXX 00:05:12.177 + tmp_file_1=/tmp/62.Uq3 00:05:12.177 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:12.177 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:12.177 + tmp_file_2=/tmp/spdk_tgt_config.json.mMB 00:05:12.177 + ret=0 00:05:12.177 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:12.437 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:12.697 + diff -u /tmp/62.Uq3 /tmp/spdk_tgt_config.json.mMB 00:05:12.697 + echo 'INFO: JSON config files are the same' 00:05:12.697 INFO: JSON config files are the same 00:05:12.697 + rm /tmp/62.Uq3 /tmp/spdk_tgt_config.json.mMB 00:05:12.697 + exit 0 00:05:12.697 15:15:01 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:12.697 15:15:01 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:12.697 INFO: changing configuration and checking if this can be detected... 00:05:12.697 15:15:01 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:12.697 15:15:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:12.698 15:15:01 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:12.698 15:15:01 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:12.698 15:15:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:12.698 + '[' 2 -ne 2 ']' 00:05:12.698 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:12.698 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:12.698 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:12.698 +++ basename /dev/fd/62 00:05:12.698 ++ mktemp /tmp/62.XXX 00:05:12.698 + tmp_file_1=/tmp/62.P9X 00:05:12.698 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:12.698 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:12.698 + tmp_file_2=/tmp/spdk_tgt_config.json.mYp 00:05:12.698 + ret=0 00:05:12.698 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:13.269 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:13.269 + diff -u /tmp/62.P9X /tmp/spdk_tgt_config.json.mYp 00:05:13.269 + ret=1 00:05:13.269 + echo '=== Start of file: /tmp/62.P9X ===' 00:05:13.269 + cat /tmp/62.P9X 00:05:13.269 + echo '=== End of file: /tmp/62.P9X ===' 00:05:13.269 + echo '' 00:05:13.269 + echo '=== Start of file: /tmp/spdk_tgt_config.json.mYp ===' 00:05:13.269 + cat /tmp/spdk_tgt_config.json.mYp 00:05:13.269 + echo '=== End of file: /tmp/spdk_tgt_config.json.mYp ===' 00:05:13.269 + echo '' 00:05:13.269 + rm /tmp/62.P9X /tmp/spdk_tgt_config.json.mYp 00:05:13.269 + exit 1 00:05:13.269 15:15:02 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:13.269 INFO: configuration change detected. 00:05:13.269 15:15:02 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:13.269 15:15:02 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:13.269 15:15:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:13.269 15:15:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.269 15:15:02 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:13.269 15:15:02 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:13.269 15:15:02 json_config -- json_config/json_config.sh@324 -- # [[ -n 365561 ]] 00:05:13.269 15:15:02 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:13.269 15:15:02 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:13.269 15:15:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:13.269 15:15:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.269 15:15:02 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:13.269 15:15:02 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:13.269 15:15:02 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:13.269 15:15:02 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:13.269 15:15:02 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:13.269 15:15:02 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:13.269 15:15:02 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:13.269 15:15:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.269 15:15:02 json_config -- json_config/json_config.sh@330 -- # killprocess 365561 00:05:13.269 15:15:02 json_config -- common/autotest_common.sh@954 -- # '[' -z 365561 ']' 00:05:13.269 15:15:02 json_config -- common/autotest_common.sh@958 -- # kill -0 365561 00:05:13.269 15:15:02 json_config -- common/autotest_common.sh@959 -- # uname 00:05:13.269 15:15:02 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:13.269 15:15:02 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 365561 00:05:13.269 15:15:02 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:13.269 15:15:02 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:13.269 15:15:02 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 365561' 00:05:13.269 killing process with pid 365561 00:05:13.269 15:15:02 json_config -- common/autotest_common.sh@973 -- # kill 365561 00:05:13.269 15:15:02 json_config -- common/autotest_common.sh@978 -- # wait 365561 00:05:13.529 15:15:02 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.529 15:15:02 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:13.529 15:15:02 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:13.529 15:15:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.529 15:15:02 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:13.530 15:15:02 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:13.530 INFO: Success 00:05:13.530 00:05:13.530 real 0m7.275s 00:05:13.530 user 0m8.924s 00:05:13.530 sys 0m1.829s 00:05:13.530 15:15:02 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.530 15:15:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.530 ************************************ 00:05:13.530 END TEST json_config 00:05:13.530 ************************************ 00:05:13.530 15:15:02 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:13.530 15:15:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.530 15:15:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.530 15:15:02 -- common/autotest_common.sh@10 -- # set +x 00:05:13.791 ************************************ 00:05:13.791 START TEST json_config_extra_key 00:05:13.791 ************************************ 00:05:13.791 15:15:02 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:13.791 15:15:02 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:13.791 15:15:02 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:13.791 15:15:02 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:13.791 15:15:02 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:13.791 15:15:02 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.791 15:15:02 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.791 15:15:02 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.791 15:15:02 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.791 15:15:02 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.791 15:15:02 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.791 15:15:02 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.791 15:15:02 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.791 15:15:02 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:05:13.791 15:15:02 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.791 15:15:02 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.791 15:15:02 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:13.791 15:15:02 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:13.791 15:15:02 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.791 15:15:02 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:13.791 15:15:02 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:13.791 15:15:02 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:13.791 15:15:02 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.791 15:15:02 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:13.791 15:15:02 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.791 15:15:02 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:13.791 15:15:02 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:13.791 15:15:02 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.791 15:15:02 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:13.791 15:15:02 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.791 15:15:02 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.791 15:15:02 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.791 15:15:02 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:13.791 15:15:02 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.791 15:15:02 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:13.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.791 --rc genhtml_branch_coverage=1 00:05:13.791 --rc genhtml_function_coverage=1 00:05:13.791 --rc genhtml_legend=1 00:05:13.791 --rc geninfo_all_blocks=1 00:05:13.791 --rc geninfo_unexecuted_blocks=1 00:05:13.791 00:05:13.791 ' 00:05:13.791 15:15:02 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:13.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.791 --rc genhtml_branch_coverage=1 00:05:13.791 --rc genhtml_function_coverage=1 00:05:13.791 --rc genhtml_legend=1 00:05:13.791 --rc geninfo_all_blocks=1 00:05:13.791 --rc geninfo_unexecuted_blocks=1 00:05:13.791 00:05:13.791 ' 00:05:13.791 15:15:02 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:13.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.791 --rc genhtml_branch_coverage=1 00:05:13.791 --rc genhtml_function_coverage=1 00:05:13.791 --rc genhtml_legend=1 00:05:13.791 --rc geninfo_all_blocks=1 00:05:13.791 --rc geninfo_unexecuted_blocks=1 00:05:13.791 00:05:13.791 ' 00:05:13.792 15:15:02 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:13.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.792 --rc genhtml_branch_coverage=1 00:05:13.792 --rc genhtml_function_coverage=1 00:05:13.792 --rc genhtml_legend=1 00:05:13.792 --rc geninfo_all_blocks=1 00:05:13.792 --rc geninfo_unexecuted_blocks=1 00:05:13.792 00:05:13.792 ' 00:05:13.792 15:15:02 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:13.792 15:15:02 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:13.792 15:15:02 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:13.792 15:15:02 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:13.792 15:15:02 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:13.792 15:15:02 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:13.792 15:15:02 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:13.792 15:15:02 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:13.792 15:15:02 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:13.792 15:15:02 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:13.792 15:15:02 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:13.792 15:15:02 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:13.792 15:15:02 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:13.792 15:15:02 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:13.792 15:15:02 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:13.792 15:15:02 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:13.792 15:15:02 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:13.792 15:15:02 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:13.792 15:15:02 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:13.792 15:15:02 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:13.792 15:15:02 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:13.792 15:15:02 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:13.792 15:15:02 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:13.792 15:15:02 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.792 15:15:02 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.792 15:15:02 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.792 15:15:02 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:13.792 15:15:02 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.792 15:15:02 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:13.792 15:15:02 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:13.792 15:15:02 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:13.792 15:15:02 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:13.792 15:15:02 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:13.792 15:15:02 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:13.792 15:15:02 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:13.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:13.792 15:15:02 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:13.792 15:15:02 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:13.792 15:15:02 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:13.792 15:15:02 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:13.792 15:15:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:13.792 15:15:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:13.792 15:15:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:13.792 15:15:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:13.792 15:15:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:13.792 15:15:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:13.792 15:15:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:13.792 15:15:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:13.792 15:15:02 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:13.792 15:15:02 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:13.792 INFO: launching applications... 
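The declarations just traced are how json_config/common.sh tracks each app it may launch: every setting lives in an associative array keyed by the app name ('target' here). A reduced sketch of that pattern, with values copied from the trace ($SPDK stands for the repo root and is an assumed variable):

declare -A app_pid=([target]='')                           # filled in once spdk_tgt starts
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')
declare -A configs_path=([target]="$SPDK/test/json_config/extra_key.json")

app=target
echo "launching $app with ${app_params[$app]} on ${app_socket[$app]}"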
00:05:13.792 15:15:02 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:13.792 15:15:02 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:13.792 15:15:02 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:13.792 15:15:02 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:13.792 15:15:02 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:13.792 15:15:02 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:13.792 15:15:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:13.792 15:15:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:13.792 15:15:02 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=366271 00:05:13.792 15:15:02 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:13.792 Waiting for target to run... 00:05:13.792 15:15:02 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 366271 /var/tmp/spdk_tgt.sock 00:05:13.792 15:15:02 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 366271 ']' 00:05:13.792 15:15:02 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:13.792 15:15:02 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.792 15:15:02 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:13.792 15:15:02 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:13.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:13.792 15:15:02 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.792 15:15:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:14.053 [2024-11-20 15:15:02.787182] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:05:14.053 [2024-11-20 15:15:02.787259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid366271 ] 00:05:14.313 [2024-11-20 15:15:03.205825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.313 [2024-11-20 15:15:03.230694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.882 15:15:03 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.882 15:15:03 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:14.882 15:15:03 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:14.882 00:05:14.882 15:15:03 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:14.882 INFO: shutting down applications... 
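waitforlisten above blocks until the target both stays alive and answers RPCs on its socket. The real helper in autotest_common.sh does more bookkeeping; a hedged approximation of the core poll (rpc_get_methods, the rpc.py -t timeout flag, and max_retries=100 all appear in the log; the 0.1 s interval is assumed):

waitforlisten_sketch() {
  local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock}
  local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for (( i = 0; i < 100; i++ )); do
    kill -0 "$pid" 2>/dev/null || return 1            # app died before listening
    "$rpc" -s "$sock" -t 1 rpc_get_methods &>/dev/null && return 0
    sleep 0.1
  done
  return 1
}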
00:05:14.882 15:15:03 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:14.882 15:15:03 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:14.882 15:15:03 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:14.883 15:15:03 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 366271 ]] 00:05:14.883 15:15:03 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 366271 00:05:14.883 15:15:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:14.883 15:15:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:14.883 15:15:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 366271 00:05:14.883 15:15:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:15.142 15:15:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:15.142 15:15:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:15.142 15:15:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 366271 00:05:15.142 15:15:04 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:15.142 15:15:04 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:15.142 15:15:04 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:15.142 15:15:04 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:15.142 SPDK target shutdown done 00:05:15.142 15:15:04 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:15.142 Success 00:05:15.142 00:05:15.142 real 0m1.583s 00:05:15.142 user 0m1.070s 00:05:15.142 sys 0m0.547s 00:05:15.142 15:15:04 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.142 15:15:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:15.142 ************************************ 00:05:15.142 END TEST json_config_extra_key 00:05:15.142 ************************************ 00:05:15.403 15:15:04 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:15.403 15:15:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.403 15:15:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.403 15:15:04 -- common/autotest_common.sh@10 -- # set +x 00:05:15.403 ************************************ 00:05:15.403 START TEST alias_rpc 00:05:15.403 ************************************ 00:05:15.403 15:15:04 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:15.403 * Looking for test storage... 
00:05:15.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:15.403 15:15:04 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:15.403 15:15:04 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:15.403 15:15:04 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:15.403 15:15:04 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:15.403 15:15:04 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.403 15:15:04 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.403 15:15:04 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.403 15:15:04 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.403 15:15:04 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.403 15:15:04 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.403 15:15:04 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.403 15:15:04 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.403 15:15:04 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.403 15:15:04 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.403 15:15:04 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.664 15:15:04 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:15.664 15:15:04 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:15.664 15:15:04 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.664 15:15:04 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:15.664 15:15:04 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:15.664 15:15:04 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:15.664 15:15:04 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.664 15:15:04 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:15.664 15:15:04 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.664 15:15:04 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:15.664 15:15:04 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:15.664 15:15:04 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.664 15:15:04 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:15.664 15:15:04 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.664 15:15:04 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.664 15:15:04 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.664 15:15:04 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:15.664 15:15:04 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.664 15:15:04 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:15.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.664 --rc genhtml_branch_coverage=1 00:05:15.664 --rc genhtml_function_coverage=1 00:05:15.664 --rc genhtml_legend=1 00:05:15.664 --rc geninfo_all_blocks=1 00:05:15.664 --rc geninfo_unexecuted_blocks=1 00:05:15.664 00:05:15.664 ' 00:05:15.664 15:15:04 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:15.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.664 --rc genhtml_branch_coverage=1 00:05:15.664 --rc genhtml_function_coverage=1 00:05:15.664 --rc genhtml_legend=1 00:05:15.664 --rc geninfo_all_blocks=1 00:05:15.664 --rc geninfo_unexecuted_blocks=1 00:05:15.664 00:05:15.664 ' 00:05:15.664 15:15:04 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:15.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.664 --rc genhtml_branch_coverage=1 00:05:15.664 --rc genhtml_function_coverage=1 00:05:15.664 --rc genhtml_legend=1 00:05:15.664 --rc geninfo_all_blocks=1 00:05:15.664 --rc geninfo_unexecuted_blocks=1 00:05:15.664 00:05:15.664 ' 00:05:15.664 15:15:04 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:15.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.664 --rc genhtml_branch_coverage=1 00:05:15.664 --rc genhtml_function_coverage=1 00:05:15.664 --rc genhtml_legend=1 00:05:15.664 --rc geninfo_all_blocks=1 00:05:15.664 --rc geninfo_unexecuted_blocks=1 00:05:15.664 00:05:15.664 ' 00:05:15.664 15:15:04 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:15.664 15:15:04 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=366612 00:05:15.664 15:15:04 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 366612 00:05:15.664 15:15:04 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 366612 ']' 00:05:15.664 15:15:04 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:15.664 15:15:04 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.664 15:15:04 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.664 15:15:04 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.664 15:15:04 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.664 15:15:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.664 [2024-11-20 15:15:04.442261] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
00:05:15.664 [2024-11-20 15:15:04.442336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid366612 ] 00:05:15.664 [2024-11-20 15:15:04.529155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.664 [2024-11-20 15:15:04.568660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.604 15:15:05 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.604 15:15:05 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:16.604 15:15:05 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:16.604 15:15:05 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 366612 00:05:16.604 15:15:05 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 366612 ']' 00:05:16.604 15:15:05 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 366612 00:05:16.604 15:15:05 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:16.604 15:15:05 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:16.604 15:15:05 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 366612 00:05:16.604 15:15:05 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:16.604 15:15:05 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:16.604 15:15:05 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 366612' 00:05:16.604 killing process with pid 366612 00:05:16.604 15:15:05 alias_rpc -- common/autotest_common.sh@973 -- # kill 366612 00:05:16.604 15:15:05 alias_rpc -- common/autotest_common.sh@978 -- # wait 366612 00:05:16.865 00:05:16.865 real 0m1.521s 00:05:16.865 user 0m1.666s 00:05:16.865 sys 0m0.433s 00:05:16.865 15:15:05 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.865 15:15:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.865 ************************************ 00:05:16.865 END TEST alias_rpc 00:05:16.865 ************************************ 00:05:16.865 15:15:05 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:16.865 15:15:05 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:16.865 15:15:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.865 15:15:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.865 15:15:05 -- common/autotest_common.sh@10 -- # set +x 00:05:16.865 ************************************ 00:05:16.865 START TEST spdkcli_tcp 00:05:16.865 ************************************ 00:05:16.865 15:15:05 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:17.126 * Looking for test storage... 
00:05:17.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:17.126 15:15:05 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:17.126 15:15:05 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:17.126 15:15:05 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:17.126 15:15:05 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:17.126 15:15:05 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.126 15:15:05 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.126 15:15:05 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.126 15:15:05 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.126 15:15:05 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.126 15:15:05 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.126 15:15:05 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.126 15:15:05 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.126 15:15:05 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.126 15:15:05 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.126 15:15:05 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.126 15:15:05 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:17.127 15:15:05 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:17.127 15:15:05 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.127 15:15:05 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:17.127 15:15:05 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:17.127 15:15:05 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:17.127 15:15:05 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.127 15:15:05 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:17.127 15:15:05 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.127 15:15:05 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:17.127 15:15:05 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:17.127 15:15:05 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.127 15:15:05 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:17.127 15:15:05 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.127 15:15:05 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.127 15:15:05 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.127 15:15:05 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:17.127 15:15:05 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.127 15:15:05 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:17.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.127 --rc genhtml_branch_coverage=1 00:05:17.127 --rc genhtml_function_coverage=1 00:05:17.127 --rc genhtml_legend=1 00:05:17.127 --rc geninfo_all_blocks=1 00:05:17.127 --rc geninfo_unexecuted_blocks=1 00:05:17.127 00:05:17.127 ' 00:05:17.127 15:15:05 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:17.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.127 --rc genhtml_branch_coverage=1 00:05:17.127 --rc genhtml_function_coverage=1 00:05:17.127 --rc genhtml_legend=1 00:05:17.127 --rc geninfo_all_blocks=1 00:05:17.127 --rc 
geninfo_unexecuted_blocks=1 00:05:17.127 00:05:17.127 ' 00:05:17.127 15:15:05 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:17.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.127 --rc genhtml_branch_coverage=1 00:05:17.127 --rc genhtml_function_coverage=1 00:05:17.127 --rc genhtml_legend=1 00:05:17.127 --rc geninfo_all_blocks=1 00:05:17.127 --rc geninfo_unexecuted_blocks=1 00:05:17.127 00:05:17.127 ' 00:05:17.127 15:15:05 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:17.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.127 --rc genhtml_branch_coverage=1 00:05:17.127 --rc genhtml_function_coverage=1 00:05:17.127 --rc genhtml_legend=1 00:05:17.127 --rc geninfo_all_blocks=1 00:05:17.127 --rc geninfo_unexecuted_blocks=1 00:05:17.127 00:05:17.127 ' 00:05:17.127 15:15:05 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:17.127 15:15:05 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:17.127 15:15:05 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:17.127 15:15:05 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:17.127 15:15:05 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:17.127 15:15:05 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:17.127 15:15:05 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:17.127 15:15:05 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:17.127 15:15:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:17.127 15:15:05 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=366952 00:05:17.127 15:15:05 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 366952 00:05:17.127 15:15:05 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:17.127 15:15:05 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 366952 ']' 00:05:17.127 15:15:05 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.127 15:15:05 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.127 15:15:05 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.127 15:15:05 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.127 15:15:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:17.127 [2024-11-20 15:15:06.047881] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
00:05:17.127 [2024-11-20 15:15:06.047957] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid366952 ] 00:05:17.387 [2024-11-20 15:15:06.137040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:17.387 [2024-11-20 15:15:06.173214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.387 [2024-11-20 15:15:06.173232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.957 15:15:06 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.957 15:15:06 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:17.957 15:15:06 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=367250 00:05:17.957 15:15:06 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:17.957 15:15:06 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:18.218 [ 00:05:18.218 "bdev_malloc_delete", 00:05:18.218 "bdev_malloc_create", 00:05:18.218 "bdev_null_resize", 00:05:18.218 "bdev_null_delete", 00:05:18.218 "bdev_null_create", 00:05:18.218 "bdev_nvme_cuse_unregister", 00:05:18.218 "bdev_nvme_cuse_register", 00:05:18.218 "bdev_opal_new_user", 00:05:18.218 "bdev_opal_set_lock_state", 00:05:18.218 "bdev_opal_delete", 00:05:18.218 "bdev_opal_get_info", 00:05:18.218 "bdev_opal_create", 00:05:18.218 "bdev_nvme_opal_revert", 00:05:18.218 "bdev_nvme_opal_init", 00:05:18.218 "bdev_nvme_send_cmd", 00:05:18.218 "bdev_nvme_set_keys", 00:05:18.218 "bdev_nvme_get_path_iostat", 00:05:18.219 "bdev_nvme_get_mdns_discovery_info", 00:05:18.219 "bdev_nvme_stop_mdns_discovery", 00:05:18.219 "bdev_nvme_start_mdns_discovery", 00:05:18.219 "bdev_nvme_set_multipath_policy", 00:05:18.219 "bdev_nvme_set_preferred_path", 00:05:18.219 "bdev_nvme_get_io_paths", 00:05:18.219 "bdev_nvme_remove_error_injection", 00:05:18.219 "bdev_nvme_add_error_injection", 00:05:18.219 "bdev_nvme_get_discovery_info", 00:05:18.219 "bdev_nvme_stop_discovery", 00:05:18.219 "bdev_nvme_start_discovery", 00:05:18.219 "bdev_nvme_get_controller_health_info", 00:05:18.219 "bdev_nvme_disable_controller", 00:05:18.219 "bdev_nvme_enable_controller", 00:05:18.219 "bdev_nvme_reset_controller", 00:05:18.219 "bdev_nvme_get_transport_statistics", 00:05:18.219 "bdev_nvme_apply_firmware", 00:05:18.219 "bdev_nvme_detach_controller", 00:05:18.219 "bdev_nvme_get_controllers", 00:05:18.219 "bdev_nvme_attach_controller", 00:05:18.219 "bdev_nvme_set_hotplug", 00:05:18.219 "bdev_nvme_set_options", 00:05:18.219 "bdev_passthru_delete", 00:05:18.219 "bdev_passthru_create", 00:05:18.219 "bdev_lvol_set_parent_bdev", 00:05:18.219 "bdev_lvol_set_parent", 00:05:18.219 "bdev_lvol_check_shallow_copy", 00:05:18.219 "bdev_lvol_start_shallow_copy", 00:05:18.219 "bdev_lvol_grow_lvstore", 00:05:18.219 "bdev_lvol_get_lvols", 00:05:18.219 "bdev_lvol_get_lvstores", 00:05:18.219 "bdev_lvol_delete", 00:05:18.219 "bdev_lvol_set_read_only", 00:05:18.219 "bdev_lvol_resize", 00:05:18.219 "bdev_lvol_decouple_parent", 00:05:18.219 "bdev_lvol_inflate", 00:05:18.219 "bdev_lvol_rename", 00:05:18.219 "bdev_lvol_clone_bdev", 00:05:18.219 "bdev_lvol_clone", 00:05:18.219 "bdev_lvol_snapshot", 00:05:18.219 "bdev_lvol_create", 00:05:18.219 "bdev_lvol_delete_lvstore", 00:05:18.219 "bdev_lvol_rename_lvstore", 
00:05:18.219 "bdev_lvol_create_lvstore", 00:05:18.219 "bdev_raid_set_options", 00:05:18.219 "bdev_raid_remove_base_bdev", 00:05:18.219 "bdev_raid_add_base_bdev", 00:05:18.219 "bdev_raid_delete", 00:05:18.219 "bdev_raid_create", 00:05:18.219 "bdev_raid_get_bdevs", 00:05:18.219 "bdev_error_inject_error", 00:05:18.219 "bdev_error_delete", 00:05:18.219 "bdev_error_create", 00:05:18.219 "bdev_split_delete", 00:05:18.219 "bdev_split_create", 00:05:18.219 "bdev_delay_delete", 00:05:18.219 "bdev_delay_create", 00:05:18.219 "bdev_delay_update_latency", 00:05:18.219 "bdev_zone_block_delete", 00:05:18.219 "bdev_zone_block_create", 00:05:18.219 "blobfs_create", 00:05:18.219 "blobfs_detect", 00:05:18.219 "blobfs_set_cache_size", 00:05:18.219 "bdev_aio_delete", 00:05:18.219 "bdev_aio_rescan", 00:05:18.219 "bdev_aio_create", 00:05:18.219 "bdev_ftl_set_property", 00:05:18.219 "bdev_ftl_get_properties", 00:05:18.219 "bdev_ftl_get_stats", 00:05:18.219 "bdev_ftl_unmap", 00:05:18.219 "bdev_ftl_unload", 00:05:18.219 "bdev_ftl_delete", 00:05:18.219 "bdev_ftl_load", 00:05:18.219 "bdev_ftl_create", 00:05:18.219 "bdev_virtio_attach_controller", 00:05:18.219 "bdev_virtio_scsi_get_devices", 00:05:18.219 "bdev_virtio_detach_controller", 00:05:18.219 "bdev_virtio_blk_set_hotplug", 00:05:18.219 "bdev_iscsi_delete", 00:05:18.219 "bdev_iscsi_create", 00:05:18.219 "bdev_iscsi_set_options", 00:05:18.219 "accel_error_inject_error", 00:05:18.219 "ioat_scan_accel_module", 00:05:18.219 "dsa_scan_accel_module", 00:05:18.219 "iaa_scan_accel_module", 00:05:18.219 "vfu_virtio_create_fs_endpoint", 00:05:18.219 "vfu_virtio_create_scsi_endpoint", 00:05:18.219 "vfu_virtio_scsi_remove_target", 00:05:18.219 "vfu_virtio_scsi_add_target", 00:05:18.219 "vfu_virtio_create_blk_endpoint", 00:05:18.219 "vfu_virtio_delete_endpoint", 00:05:18.219 "keyring_file_remove_key", 00:05:18.219 "keyring_file_add_key", 00:05:18.219 "keyring_linux_set_options", 00:05:18.219 "fsdev_aio_delete", 00:05:18.219 "fsdev_aio_create", 00:05:18.219 "iscsi_get_histogram", 00:05:18.219 "iscsi_enable_histogram", 00:05:18.219 "iscsi_set_options", 00:05:18.219 "iscsi_get_auth_groups", 00:05:18.219 "iscsi_auth_group_remove_secret", 00:05:18.219 "iscsi_auth_group_add_secret", 00:05:18.219 "iscsi_delete_auth_group", 00:05:18.219 "iscsi_create_auth_group", 00:05:18.219 "iscsi_set_discovery_auth", 00:05:18.219 "iscsi_get_options", 00:05:18.219 "iscsi_target_node_request_logout", 00:05:18.219 "iscsi_target_node_set_redirect", 00:05:18.219 "iscsi_target_node_set_auth", 00:05:18.219 "iscsi_target_node_add_lun", 00:05:18.219 "iscsi_get_stats", 00:05:18.219 "iscsi_get_connections", 00:05:18.219 "iscsi_portal_group_set_auth", 00:05:18.219 "iscsi_start_portal_group", 00:05:18.219 "iscsi_delete_portal_group", 00:05:18.219 "iscsi_create_portal_group", 00:05:18.219 "iscsi_get_portal_groups", 00:05:18.219 "iscsi_delete_target_node", 00:05:18.219 "iscsi_target_node_remove_pg_ig_maps", 00:05:18.219 "iscsi_target_node_add_pg_ig_maps", 00:05:18.219 "iscsi_create_target_node", 00:05:18.219 "iscsi_get_target_nodes", 00:05:18.219 "iscsi_delete_initiator_group", 00:05:18.219 "iscsi_initiator_group_remove_initiators", 00:05:18.219 "iscsi_initiator_group_add_initiators", 00:05:18.219 "iscsi_create_initiator_group", 00:05:18.219 "iscsi_get_initiator_groups", 00:05:18.219 "nvmf_set_crdt", 00:05:18.219 "nvmf_set_config", 00:05:18.219 "nvmf_set_max_subsystems", 00:05:18.219 "nvmf_stop_mdns_prr", 00:05:18.219 "nvmf_publish_mdns_prr", 00:05:18.219 "nvmf_subsystem_get_listeners", 00:05:18.219 
"nvmf_subsystem_get_qpairs", 00:05:18.219 "nvmf_subsystem_get_controllers", 00:05:18.219 "nvmf_get_stats", 00:05:18.219 "nvmf_get_transports", 00:05:18.219 "nvmf_create_transport", 00:05:18.219 "nvmf_get_targets", 00:05:18.219 "nvmf_delete_target", 00:05:18.219 "nvmf_create_target", 00:05:18.219 "nvmf_subsystem_allow_any_host", 00:05:18.219 "nvmf_subsystem_set_keys", 00:05:18.219 "nvmf_subsystem_remove_host", 00:05:18.219 "nvmf_subsystem_add_host", 00:05:18.219 "nvmf_ns_remove_host", 00:05:18.219 "nvmf_ns_add_host", 00:05:18.219 "nvmf_subsystem_remove_ns", 00:05:18.219 "nvmf_subsystem_set_ns_ana_group", 00:05:18.219 "nvmf_subsystem_add_ns", 00:05:18.219 "nvmf_subsystem_listener_set_ana_state", 00:05:18.219 "nvmf_discovery_get_referrals", 00:05:18.219 "nvmf_discovery_remove_referral", 00:05:18.219 "nvmf_discovery_add_referral", 00:05:18.219 "nvmf_subsystem_remove_listener", 00:05:18.219 "nvmf_subsystem_add_listener", 00:05:18.219 "nvmf_delete_subsystem", 00:05:18.219 "nvmf_create_subsystem", 00:05:18.219 "nvmf_get_subsystems", 00:05:18.219 "env_dpdk_get_mem_stats", 00:05:18.219 "nbd_get_disks", 00:05:18.219 "nbd_stop_disk", 00:05:18.219 "nbd_start_disk", 00:05:18.219 "ublk_recover_disk", 00:05:18.219 "ublk_get_disks", 00:05:18.219 "ublk_stop_disk", 00:05:18.219 "ublk_start_disk", 00:05:18.219 "ublk_destroy_target", 00:05:18.219 "ublk_create_target", 00:05:18.219 "virtio_blk_create_transport", 00:05:18.219 "virtio_blk_get_transports", 00:05:18.219 "vhost_controller_set_coalescing", 00:05:18.219 "vhost_get_controllers", 00:05:18.219 "vhost_delete_controller", 00:05:18.219 "vhost_create_blk_controller", 00:05:18.219 "vhost_scsi_controller_remove_target", 00:05:18.219 "vhost_scsi_controller_add_target", 00:05:18.219 "vhost_start_scsi_controller", 00:05:18.219 "vhost_create_scsi_controller", 00:05:18.219 "thread_set_cpumask", 00:05:18.219 "scheduler_set_options", 00:05:18.219 "framework_get_governor", 00:05:18.219 "framework_get_scheduler", 00:05:18.219 "framework_set_scheduler", 00:05:18.219 "framework_get_reactors", 00:05:18.219 "thread_get_io_channels", 00:05:18.219 "thread_get_pollers", 00:05:18.219 "thread_get_stats", 00:05:18.219 "framework_monitor_context_switch", 00:05:18.219 "spdk_kill_instance", 00:05:18.219 "log_enable_timestamps", 00:05:18.219 "log_get_flags", 00:05:18.219 "log_clear_flag", 00:05:18.219 "log_set_flag", 00:05:18.219 "log_get_level", 00:05:18.219 "log_set_level", 00:05:18.219 "log_get_print_level", 00:05:18.219 "log_set_print_level", 00:05:18.219 "framework_enable_cpumask_locks", 00:05:18.219 "framework_disable_cpumask_locks", 00:05:18.219 "framework_wait_init", 00:05:18.219 "framework_start_init", 00:05:18.219 "scsi_get_devices", 00:05:18.219 "bdev_get_histogram", 00:05:18.219 "bdev_enable_histogram", 00:05:18.219 "bdev_set_qos_limit", 00:05:18.219 "bdev_set_qd_sampling_period", 00:05:18.219 "bdev_get_bdevs", 00:05:18.219 "bdev_reset_iostat", 00:05:18.219 "bdev_get_iostat", 00:05:18.219 "bdev_examine", 00:05:18.219 "bdev_wait_for_examine", 00:05:18.219 "bdev_set_options", 00:05:18.219 "accel_get_stats", 00:05:18.219 "accel_set_options", 00:05:18.219 "accel_set_driver", 00:05:18.219 "accel_crypto_key_destroy", 00:05:18.219 "accel_crypto_keys_get", 00:05:18.219 "accel_crypto_key_create", 00:05:18.219 "accel_assign_opc", 00:05:18.219 "accel_get_module_info", 00:05:18.219 "accel_get_opc_assignments", 00:05:18.219 "vmd_rescan", 00:05:18.219 "vmd_remove_device", 00:05:18.219 "vmd_enable", 00:05:18.219 "sock_get_default_impl", 00:05:18.219 "sock_set_default_impl", 
00:05:18.219 "sock_impl_set_options", 00:05:18.219 "sock_impl_get_options", 00:05:18.219 "iobuf_get_stats", 00:05:18.219 "iobuf_set_options", 00:05:18.219 "keyring_get_keys", 00:05:18.219 "vfu_tgt_set_base_path", 00:05:18.219 "framework_get_pci_devices", 00:05:18.219 "framework_get_config", 00:05:18.219 "framework_get_subsystems", 00:05:18.219 "fsdev_set_opts", 00:05:18.220 "fsdev_get_opts", 00:05:18.220 "trace_get_info", 00:05:18.220 "trace_get_tpoint_group_mask", 00:05:18.220 "trace_disable_tpoint_group", 00:05:18.220 "trace_enable_tpoint_group", 00:05:18.220 "trace_clear_tpoint_mask", 00:05:18.220 "trace_set_tpoint_mask", 00:05:18.220 "notify_get_notifications", 00:05:18.220 "notify_get_types", 00:05:18.220 "spdk_get_version", 00:05:18.220 "rpc_get_methods" 00:05:18.220 ] 00:05:18.220 15:15:06 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:18.220 15:15:06 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:18.220 15:15:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:18.220 15:15:07 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:18.220 15:15:07 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 366952 00:05:18.220 15:15:07 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 366952 ']' 00:05:18.220 15:15:07 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 366952 00:05:18.220 15:15:07 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:18.220 15:15:07 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.220 15:15:07 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 366952 00:05:18.220 15:15:07 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:18.220 15:15:07 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:18.220 15:15:07 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 366952' 00:05:18.220 killing process with pid 366952 00:05:18.220 15:15:07 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 366952 00:05:18.220 15:15:07 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 366952 00:05:18.482 00:05:18.482 real 0m1.507s 00:05:18.482 user 0m2.685s 00:05:18.482 sys 0m0.484s 00:05:18.482 15:15:07 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.482 15:15:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:18.482 ************************************ 00:05:18.482 END TEST spdkcli_tcp 00:05:18.482 ************************************ 00:05:18.482 15:15:07 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:18.482 15:15:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.482 15:15:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.482 15:15:07 -- common/autotest_common.sh@10 -- # set +x 00:05:18.482 ************************************ 00:05:18.482 START TEST dpdk_mem_utility 00:05:18.482 ************************************ 00:05:18.482 15:15:07 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:18.743 * Looking for test storage... 
00:05:18.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:18.743 15:15:07 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:18.743 15:15:07 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:18.743 15:15:07 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:18.743 15:15:07 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:18.743 15:15:07 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.743 15:15:07 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.743 15:15:07 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.743 15:15:07 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.743 15:15:07 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.743 15:15:07 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.743 15:15:07 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.743 15:15:07 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.743 15:15:07 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.743 15:15:07 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.743 15:15:07 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.743 15:15:07 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:18.743 15:15:07 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:18.743 15:15:07 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.743 15:15:07 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:18.743 15:15:07 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:18.743 15:15:07 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:18.743 15:15:07 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.743 15:15:07 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:18.743 15:15:07 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.743 15:15:07 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:18.743 15:15:07 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:18.743 15:15:07 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.743 15:15:07 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:18.743 15:15:07 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.743 15:15:07 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.743 15:15:07 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.743 15:15:07 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:18.743 15:15:07 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.743 15:15:07 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:18.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.743 --rc genhtml_branch_coverage=1 00:05:18.743 --rc genhtml_function_coverage=1 00:05:18.743 --rc genhtml_legend=1 00:05:18.743 --rc geninfo_all_blocks=1 00:05:18.743 --rc geninfo_unexecuted_blocks=1 00:05:18.743 00:05:18.743 ' 00:05:18.743 15:15:07 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:18.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.743 --rc 
genhtml_branch_coverage=1 00:05:18.743 --rc genhtml_function_coverage=1 00:05:18.743 --rc genhtml_legend=1 00:05:18.743 --rc geninfo_all_blocks=1 00:05:18.743 --rc geninfo_unexecuted_blocks=1 00:05:18.743 00:05:18.743 ' 00:05:18.743 15:15:07 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:18.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.743 --rc genhtml_branch_coverage=1 00:05:18.743 --rc genhtml_function_coverage=1 00:05:18.743 --rc genhtml_legend=1 00:05:18.743 --rc geninfo_all_blocks=1 00:05:18.743 --rc geninfo_unexecuted_blocks=1 00:05:18.743 00:05:18.743 ' 00:05:18.743 15:15:07 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:18.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.743 --rc genhtml_branch_coverage=1 00:05:18.743 --rc genhtml_function_coverage=1 00:05:18.743 --rc genhtml_legend=1 00:05:18.743 --rc geninfo_all_blocks=1 00:05:18.743 --rc geninfo_unexecuted_blocks=1 00:05:18.743 00:05:18.743 ' 00:05:18.743 15:15:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:18.743 15:15:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=367326 00:05:18.744 15:15:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 367326 00:05:18.744 15:15:07 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 367326 ']' 00:05:18.744 15:15:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:18.744 15:15:07 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.744 15:15:07 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.744 15:15:07 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.744 15:15:07 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.744 15:15:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:18.744 [2024-11-20 15:15:07.609199] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
00:05:18.744 [2024-11-20 15:15:07.609275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid367326 ] 00:05:18.744 [2024-11-20 15:15:07.699842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.063 [2024-11-20 15:15:07.736211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.742 15:15:08 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.742 15:15:08 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:19.742 15:15:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:19.742 15:15:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:19.742 15:15:08 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.742 15:15:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:19.742 { 00:05:19.742 "filename": "/tmp/spdk_mem_dump.txt" 00:05:19.742 } 00:05:19.742 15:15:08 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.742 15:15:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:19.742 DPDK memory size 810.000000 MiB in 1 heap(s) 00:05:19.742 1 heaps totaling size 810.000000 MiB 00:05:19.742 size: 810.000000 MiB heap id: 0 00:05:19.742 end heaps---------- 00:05:19.742 9 mempools totaling size 595.772034 MiB 00:05:19.742 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:19.742 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:19.742 size: 92.545471 MiB name: bdev_io_367326 00:05:19.742 size: 50.003479 MiB name: msgpool_367326 00:05:19.742 size: 36.509338 MiB name: fsdev_io_367326 00:05:19.742 size: 21.763794 MiB name: PDU_Pool 00:05:19.742 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:19.742 size: 4.133484 MiB name: evtpool_367326 00:05:19.742 size: 0.026123 MiB name: Session_Pool 00:05:19.742 end mempools------- 00:05:19.742 6 memzones totaling size 4.142822 MiB 00:05:19.742 size: 1.000366 MiB name: RG_ring_0_367326 00:05:19.742 size: 1.000366 MiB name: RG_ring_1_367326 00:05:19.742 size: 1.000366 MiB name: RG_ring_4_367326 00:05:19.742 size: 1.000366 MiB name: RG_ring_5_367326 00:05:19.742 size: 0.125366 MiB name: RG_ring_2_367326 00:05:19.742 size: 0.015991 MiB name: RG_ring_3_367326 00:05:19.742 end memzones------- 00:05:19.742 15:15:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:19.742 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:19.742 list of free elements. 
size: 10.862488 MiB 00:05:19.743 element at address: 0x200018a00000 with size: 0.999878 MiB 00:05:19.743 element at address: 0x200018c00000 with size: 0.999878 MiB 00:05:19.743 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:19.743 element at address: 0x200031800000 with size: 0.994446 MiB 00:05:19.743 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:19.743 element at address: 0x200012c00000 with size: 0.954285 MiB 00:05:19.743 element at address: 0x200018e00000 with size: 0.936584 MiB 00:05:19.743 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:19.743 element at address: 0x20001a600000 with size: 0.582886 MiB 00:05:19.743 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:19.743 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:19.743 element at address: 0x200019000000 with size: 0.485657 MiB 00:05:19.743 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:19.743 element at address: 0x200027a00000 with size: 0.410034 MiB 00:05:19.743 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:19.743 list of standard malloc elements. size: 199.218628 MiB 00:05:19.743 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:19.743 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:19.743 element at address: 0x200018afff80 with size: 1.000122 MiB 00:05:19.743 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:05:19.743 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:19.743 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:19.743 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:05:19.743 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:19.743 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:05:19.743 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:19.743 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:19.743 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:19.743 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:19.743 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:19.743 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:19.743 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:19.743 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:19.743 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:19.743 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:19.743 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:19.743 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:19.743 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:19.743 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:19.743 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:19.743 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:19.743 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:19.743 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:19.743 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:19.743 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:19.743 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:19.743 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:19.743 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:19.743 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:05:19.743 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:05:19.743 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:05:19.743 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:05:19.743 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:05:19.743 element at address: 0x20001a695380 with size: 0.000183 MiB 00:05:19.743 element at address: 0x20001a695440 with size: 0.000183 MiB 00:05:19.743 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:05:19.743 element at address: 0x200027a69040 with size: 0.000183 MiB 00:05:19.743 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:05:19.743 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:05:19.743 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:05:19.743 list of memzone associated elements. size: 599.918884 MiB 00:05:19.743 element at address: 0x20001a695500 with size: 211.416748 MiB 00:05:19.743 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:19.743 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:05:19.743 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:19.743 element at address: 0x200012df4780 with size: 92.045044 MiB 00:05:19.743 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_367326_0 00:05:19.743 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:19.743 associated memzone info: size: 48.002930 MiB name: MP_msgpool_367326_0 00:05:19.743 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:19.743 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_367326_0 00:05:19.743 element at address: 0x2000191be940 with size: 20.255554 MiB 00:05:19.743 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:19.743 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:05:19.743 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:19.743 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:19.743 associated memzone info: size: 3.000122 MiB name: MP_evtpool_367326_0 00:05:19.743 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:19.743 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_367326 00:05:19.743 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:19.743 associated memzone info: size: 1.007996 MiB name: MP_evtpool_367326 00:05:19.743 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:19.743 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:19.743 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:05:19.743 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:19.743 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:19.743 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:19.743 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:19.743 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:19.743 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:19.743 associated memzone info: size: 1.000366 MiB name: RG_ring_0_367326 00:05:19.743 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:19.743 associated memzone info: size: 1.000366 MiB name: RG_ring_1_367326 00:05:19.743 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:05:19.743 associated memzone info: size: 1.000366 MiB name: RG_ring_4_367326 00:05:19.743 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:05:19.743 associated memzone info: size: 1.000366 MiB name: RG_ring_5_367326 00:05:19.743 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:19.743 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_367326 00:05:19.743 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:19.743 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_367326 00:05:19.743 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:19.743 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:19.743 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:19.743 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:19.743 element at address: 0x20001907c540 with size: 0.250488 MiB 00:05:19.743 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:19.743 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:19.743 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_367326 00:05:19.743 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:19.743 associated memzone info: size: 0.125366 MiB name: RG_ring_2_367326 00:05:19.743 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:19.743 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:19.743 element at address: 0x200027a69100 with size: 0.023743 MiB 00:05:19.743 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:19.743 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:19.743 associated memzone info: size: 0.015991 MiB name: RG_ring_3_367326 00:05:19.743 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:05:19.743 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:19.743 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:19.743 associated memzone info: size: 0.000183 MiB name: MP_msgpool_367326 00:05:19.743 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:19.743 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_367326 00:05:19.743 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:19.743 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_367326 00:05:19.743 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:05:19.743 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:19.743 15:15:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:19.743 15:15:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 367326 00:05:19.743 15:15:08 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 367326 ']' 00:05:19.744 15:15:08 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 367326 00:05:19.744 15:15:08 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:19.744 15:15:08 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:19.744 15:15:08 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 367326 00:05:19.744 15:15:08 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:19.744 15:15:08 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:19.744 15:15:08 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 367326' 00:05:19.744 killing process with pid 367326 00:05:19.744 15:15:08 dpdk_mem_utility -- 
common/autotest_common.sh@973 -- # kill 367326 00:05:19.744 15:15:08 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 367326 00:05:20.005 00:05:20.005 real 0m1.403s 00:05:20.005 user 0m1.465s 00:05:20.005 sys 0m0.431s 00:05:20.005 15:15:08 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.005 15:15:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:20.005 ************************************ 00:05:20.005 END TEST dpdk_mem_utility 00:05:20.005 ************************************ 00:05:20.005 15:15:08 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:20.005 15:15:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.005 15:15:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.005 15:15:08 -- common/autotest_common.sh@10 -- # set +x 00:05:20.005 ************************************ 00:05:20.005 START TEST event 00:05:20.005 ************************************ 00:05:20.005 15:15:08 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:20.005 * Looking for test storage... 00:05:20.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:20.005 15:15:08 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:20.005 15:15:08 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:20.005 15:15:08 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:20.266 15:15:09 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:20.266 15:15:09 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.266 15:15:09 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.266 15:15:09 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.266 15:15:09 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.266 15:15:09 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.266 15:15:09 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.266 15:15:09 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.266 15:15:09 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.266 15:15:09 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.266 15:15:09 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.266 15:15:09 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.266 15:15:09 event -- scripts/common.sh@344 -- # case "$op" in 00:05:20.266 15:15:09 event -- scripts/common.sh@345 -- # : 1 00:05:20.266 15:15:09 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.266 15:15:09 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:20.266 15:15:09 event -- scripts/common.sh@365 -- # decimal 1 00:05:20.266 15:15:09 event -- scripts/common.sh@353 -- # local d=1 00:05:20.266 15:15:09 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.266 15:15:09 event -- scripts/common.sh@355 -- # echo 1 00:05:20.266 15:15:09 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.266 15:15:09 event -- scripts/common.sh@366 -- # decimal 2 00:05:20.266 15:15:09 event -- scripts/common.sh@353 -- # local d=2 00:05:20.266 15:15:09 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.266 15:15:09 event -- scripts/common.sh@355 -- # echo 2 00:05:20.266 15:15:09 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.266 15:15:09 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.266 15:15:09 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.266 15:15:09 event -- scripts/common.sh@368 -- # return 0 00:05:20.266 15:15:09 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.266 15:15:09 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:20.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.267 --rc genhtml_branch_coverage=1 00:05:20.267 --rc genhtml_function_coverage=1 00:05:20.267 --rc genhtml_legend=1 00:05:20.267 --rc geninfo_all_blocks=1 00:05:20.267 --rc geninfo_unexecuted_blocks=1 00:05:20.267 00:05:20.267 ' 00:05:20.267 15:15:09 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:20.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.267 --rc genhtml_branch_coverage=1 00:05:20.267 --rc genhtml_function_coverage=1 00:05:20.267 --rc genhtml_legend=1 00:05:20.267 --rc geninfo_all_blocks=1 00:05:20.267 --rc geninfo_unexecuted_blocks=1 00:05:20.267 00:05:20.267 ' 00:05:20.267 15:15:09 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:20.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.267 --rc genhtml_branch_coverage=1 00:05:20.267 --rc genhtml_function_coverage=1 00:05:20.267 --rc genhtml_legend=1 00:05:20.267 --rc geninfo_all_blocks=1 00:05:20.267 --rc geninfo_unexecuted_blocks=1 00:05:20.267 00:05:20.267 ' 00:05:20.267 15:15:09 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:20.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.267 --rc genhtml_branch_coverage=1 00:05:20.267 --rc genhtml_function_coverage=1 00:05:20.267 --rc genhtml_legend=1 00:05:20.267 --rc geninfo_all_blocks=1 00:05:20.267 --rc geninfo_unexecuted_blocks=1 00:05:20.267 00:05:20.267 ' 00:05:20.267 15:15:09 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:20.267 15:15:09 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:20.267 15:15:09 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:20.267 15:15:09 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:20.267 15:15:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.267 15:15:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.267 ************************************ 00:05:20.267 START TEST event_perf 00:05:20.267 ************************************ 00:05:20.267 15:15:09 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:20.267 Running I/O for 1 seconds...[2024-11-20 15:15:09.100196] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:05:20.267 [2024-11-20 15:15:09.100302] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid367963 ] 00:05:20.267 [2024-11-20 15:15:09.191254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:20.527 [2024-11-20 15:15:09.236012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.527 [2024-11-20 15:15:09.236189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.527 [2024-11-20 15:15:09.236321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:20.527 [2024-11-20 15:15:09.236486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.468 Running I/O for 1 seconds... 00:05:21.468 lcore 0: 175947 00:05:21.468 lcore 1: 175949 00:05:21.468 lcore 2: 175946 00:05:21.468 lcore 3: 175947 00:05:21.468 done. 00:05:21.468 00:05:21.468 real 0m1.188s 00:05:21.468 user 0m4.080s 00:05:21.468 sys 0m0.102s 00:05:21.468 15:15:10 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.468 15:15:10 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:21.468 ************************************ 00:05:21.468 END TEST event_perf 00:05:21.468 ************************************ 00:05:21.468 15:15:10 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:21.468 15:15:10 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:21.468 15:15:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.468 15:15:10 event -- common/autotest_common.sh@10 -- # set +x 00:05:21.468 ************************************ 00:05:21.468 START TEST event_reactor 00:05:21.468 ************************************ 00:05:21.468 15:15:10 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:21.468 [2024-11-20 15:15:10.363694] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
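The event_perf figures above come from a fixed one-second run across four reactors; each lcore counter is the number of events that core processed in that window. Re-running it by hand reduces to the invocation already visible in the log (paths per this workspace):

    # -m 0xF pins reactors to cores 0-3, -t 1 runs the measurement for one second
    ./test/event/event_perf/event_perf -m 0xF -t 1
    # output shape: one "lcore N: <events>" line per reactor, then "done."

The follow-on event_reactor test below uses the same harness but drives a single reactor and logs a tick trace (periods 100/250/500) instead of raw counts.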
00:05:21.468 [2024-11-20 15:15:10.363799] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid368509 ] 00:05:21.729 [2024-11-20 15:15:10.450920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.729 [2024-11-20 15:15:10.490894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.672 test_start 00:05:22.672 oneshot 00:05:22.672 tick 100 00:05:22.672 tick 100 00:05:22.672 tick 250 00:05:22.672 tick 100 00:05:22.672 tick 100 00:05:22.672 tick 250 00:05:22.672 tick 100 00:05:22.672 tick 500 00:05:22.672 tick 100 00:05:22.672 tick 100 00:05:22.672 tick 250 00:05:22.672 tick 100 00:05:22.672 tick 100 00:05:22.672 test_end 00:05:22.672 00:05:22.672 real 0m1.176s 00:05:22.672 user 0m1.092s 00:05:22.672 sys 0m0.078s 00:05:22.672 15:15:11 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.672 15:15:11 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:22.672 ************************************ 00:05:22.672 END TEST event_reactor 00:05:22.672 ************************************ 00:05:22.672 15:15:11 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:22.672 15:15:11 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:22.672 15:15:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.672 15:15:11 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.672 ************************************ 00:05:22.672 START TEST event_reactor_perf 00:05:22.672 ************************************ 00:05:22.672 15:15:11 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:22.672 [2024-11-20 15:15:11.618547] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
00:05:22.672 [2024-11-20 15:15:11.618653] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid368894 ] 00:05:22.932 [2024-11-20 15:15:11.705310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.932 [2024-11-20 15:15:11.740529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.874 test_start 00:05:23.874 test_end 00:05:23.874 Performance: 535594 events per second 00:05:23.874 00:05:23.874 real 0m1.170s 00:05:23.874 user 0m1.091s 00:05:23.874 sys 0m0.076s 00:05:23.874 15:15:12 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.874 15:15:12 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:23.874 ************************************ 00:05:23.874 END TEST event_reactor_perf 00:05:23.874 ************************************ 00:05:23.874 15:15:12 event -- event/event.sh@49 -- # uname -s 00:05:23.874 15:15:12 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:23.874 15:15:12 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:23.874 15:15:12 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.874 15:15:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.874 15:15:12 event -- common/autotest_common.sh@10 -- # set +x 00:05:24.135 ************************************ 00:05:24.135 START TEST event_scheduler 00:05:24.135 ************************************ 00:05:24.135 15:15:12 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:24.135 * Looking for test storage... 
00:05:24.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:24.135 15:15:12 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:24.135 15:15:12 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:24.135 15:15:12 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:24.135 15:15:13 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:24.135 15:15:13 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.135 15:15:13 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.135 15:15:13 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.135 15:15:13 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.135 15:15:13 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.135 15:15:13 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.135 15:15:13 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.135 15:15:13 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.135 15:15:13 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.135 15:15:13 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.135 15:15:13 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.135 15:15:13 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:24.135 15:15:13 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:24.135 15:15:13 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.135 15:15:13 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.135 15:15:13 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:24.135 15:15:13 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:24.135 15:15:13 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.135 15:15:13 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:24.135 15:15:13 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.135 15:15:13 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:24.135 15:15:13 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:24.135 15:15:13 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.135 15:15:13 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:24.135 15:15:13 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.135 15:15:13 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.135 15:15:13 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.135 15:15:13 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:24.135 15:15:13 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.135 15:15:13 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:24.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.135 --rc genhtml_branch_coverage=1 00:05:24.135 --rc genhtml_function_coverage=1 00:05:24.135 --rc genhtml_legend=1 00:05:24.135 --rc geninfo_all_blocks=1 00:05:24.135 --rc geninfo_unexecuted_blocks=1 00:05:24.135 00:05:24.135 ' 00:05:24.135 15:15:13 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:24.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.135 --rc genhtml_branch_coverage=1 00:05:24.135 --rc genhtml_function_coverage=1 00:05:24.135 --rc genhtml_legend=1 00:05:24.135 --rc geninfo_all_blocks=1 00:05:24.135 --rc geninfo_unexecuted_blocks=1 00:05:24.135 00:05:24.135 ' 00:05:24.135 15:15:13 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:24.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.135 --rc genhtml_branch_coverage=1 00:05:24.135 --rc genhtml_function_coverage=1 00:05:24.135 --rc genhtml_legend=1 00:05:24.135 --rc geninfo_all_blocks=1 00:05:24.135 --rc geninfo_unexecuted_blocks=1 00:05:24.135 00:05:24.135 ' 00:05:24.135 15:15:13 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:24.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.135 --rc genhtml_branch_coverage=1 00:05:24.135 --rc genhtml_function_coverage=1 00:05:24.135 --rc genhtml_legend=1 00:05:24.135 --rc geninfo_all_blocks=1 00:05:24.135 --rc geninfo_unexecuted_blocks=1 00:05:24.135 00:05:24.135 ' 00:05:24.135 15:15:13 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:24.135 15:15:13 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=369157 00:05:24.135 15:15:13 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.135 15:15:13 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 369157 00:05:24.135 15:15:13 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 
00:05:24.135 15:15:13 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 369157 ']' 00:05:24.135 15:15:13 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.135 15:15:13 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.135 15:15:13 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.135 15:15:13 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.135 15:15:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:24.397 [2024-11-20 15:15:13.106331] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:05:24.397 [2024-11-20 15:15:13.106408] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid369157 ] 00:05:24.397 [2024-11-20 15:15:13.200210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:24.397 [2024-11-20 15:15:13.256596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.397 [2024-11-20 15:15:13.256758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.397 [2024-11-20 15:15:13.256922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:24.397 [2024-11-20 15:15:13.256922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:24.968 15:15:13 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.968 15:15:13 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:24.968 15:15:13 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:24.968 15:15:13 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.968 15:15:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:25.230 [2024-11-20 15:15:13.931218] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:25.230 [2024-11-20 15:15:13.931237] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:25.230 [2024-11-20 15:15:13.931248] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:25.230 [2024-11-20 15:15:13.931253] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:25.230 [2024-11-20 15:15:13.931259] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:25.230 15:15:13 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.230 15:15:13 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:25.230 15:15:13 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.230 15:15:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:25.230 [2024-11-20 15:15:13.998075] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
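The scheduler app above is started with --wait-for-rpc, switched to the dynamic scheduler over RPC, and only then allowed to finish init; the dpdk_governor error is expected when the core mask covers only some SMT siblings, and the run proceeds with the dynamic scheduler's defaults (load limit 20, core limit 80, core busy 95). The same bring-up against a plain spdk_tgt would look like this (sketch; all the RPC names appear in the rpc_get_methods listing earlier in this log):

    # Start paused so the scheduler can be chosen before the reactors spin up
    ./build/bin/spdk_tgt -m 0xF --wait-for-rpc &
    # once /var/tmp/spdk.sock answers:
    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py framework_get_scheduler   # confirm the active scheduler

The thread-create calls that follow below go through a test-only rpc.py plugin (scheduler_plugin), which is why they are invoked as rpc_cmd --plugin rather than as stock RPCs.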
00:05:25.230 15:15:13 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.230 15:15:13 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:25.230 15:15:13 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.230 15:15:13 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.230 15:15:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:25.230 ************************************ 00:05:25.230 START TEST scheduler_create_thread 00:05:25.230 ************************************ 00:05:25.230 15:15:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:25.230 15:15:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:25.230 15:15:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.230 15:15:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.230 2 00:05:25.230 15:15:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.230 15:15:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:25.230 15:15:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.230 15:15:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.230 3 00:05:25.230 15:15:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.230 15:15:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:25.230 15:15:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.230 15:15:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.230 4 00:05:25.230 15:15:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.230 15:15:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:25.230 15:15:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.230 15:15:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.230 5 00:05:25.230 15:15:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.230 15:15:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:25.230 15:15:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.230 15:15:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.230 6 00:05:25.230 15:15:14 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.230 15:15:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:25.230 15:15:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.230 15:15:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.230 7 00:05:25.230 15:15:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.230 15:15:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:25.230 15:15:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.230 15:15:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.230 8 00:05:25.230 15:15:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.230 15:15:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:25.230 15:15:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.230 15:15:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.230 9 00:05:25.230 15:15:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.230 15:15:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:25.230 15:15:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.230 15:15:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.802 10 00:05:25.802 15:15:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.803 15:15:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:25.803 15:15:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.803 15:15:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.188 15:15:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.188 15:15:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:27.188 15:15:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:27.188 15:15:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.188 15:15:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.131 15:15:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.131 15:15:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:28.131 15:15:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.131 15:15:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.708 15:15:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.708 15:15:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:28.708 15:15:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:28.708 15:15:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.708 15:15:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.653 15:15:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.653 00:05:29.653 real 0m4.226s 00:05:29.653 user 0m0.027s 00:05:29.653 sys 0m0.005s 00:05:29.653 15:15:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.653 15:15:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.653 ************************************ 00:05:29.653 END TEST scheduler_create_thread 00:05:29.653 ************************************ 00:05:29.653 15:15:18 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:29.653 15:15:18 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 369157 00:05:29.653 15:15:18 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 369157 ']' 00:05:29.653 15:15:18 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 369157 00:05:29.653 15:15:18 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:29.653 15:15:18 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.653 15:15:18 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 369157 00:05:29.653 15:15:18 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:29.653 15:15:18 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:29.653 15:15:18 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 369157' 00:05:29.653 killing process with pid 369157 00:05:29.653 15:15:18 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 369157 00:05:29.653 15:15:18 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 369157 00:05:29.653 [2024-11-20 15:15:18.543705] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
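Teardown here is the killprocess helper seen throughout this log: confirm the pid still exists, verify it maps to an SPDK reactor (never sudo), kill it, then wait so the exit status is reaped. Reduced to the steps visible in the xtrace (a simplified sketch of the helper, not its full source):

    kill -0 "$pid"                              # process still exists?
    name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_2
    if [ "$name" != "sudo" ]; then
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    fi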
00:05:29.913 00:05:29.913 real 0m5.849s 00:05:29.913 user 0m12.905s 00:05:29.913 sys 0m0.437s 00:05:29.913 15:15:18 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.913 15:15:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:29.913 ************************************ 00:05:29.913 END TEST event_scheduler 00:05:29.913 ************************************ 00:05:29.913 15:15:18 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:29.913 15:15:18 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:29.913 15:15:18 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.913 15:15:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.913 15:15:18 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.913 ************************************ 00:05:29.913 START TEST app_repeat 00:05:29.914 ************************************ 00:05:29.914 15:15:18 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:29.914 15:15:18 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.914 15:15:18 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.914 15:15:18 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:29.914 15:15:18 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.914 15:15:18 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:29.914 15:15:18 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:29.914 15:15:18 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:29.914 15:15:18 event.app_repeat -- event/event.sh@19 -- # repeat_pid=370358 00:05:29.914 15:15:18 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:29.914 15:15:18 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:29.914 15:15:18 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 370358' 00:05:29.914 Process app_repeat pid: 370358 00:05:29.914 15:15:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:29.914 15:15:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:29.914 spdk_app_start Round 0 00:05:29.914 15:15:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 370358 /var/tmp/spdk-nbd.sock 00:05:29.914 15:15:18 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 370358 ']' 00:05:29.914 15:15:18 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:29.914 15:15:18 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.914 15:15:18 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:29.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:29.914 15:15:18 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.914 15:15:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:29.914 [2024-11-20 15:15:18.825267] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
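The app_repeat run that starts here exercises SPDK application restart: launch the app with an RPC socket, then for each round create bdevs, verify them over NBD, and ask the app to shut down so the next round comes up fresh. A hedged sketch of the harness as it can be read from the event.sh trace (paths shortened, rpc.py standing in for the full scripts/rpc.py path, and the loop body compressed):

  modprobe nbd
  ./test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
  repeat_pid=$!
  trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
  echo "Process app_repeat pid: $repeat_pid"
  for i in {0..2}; do
      echo "spdk_app_start Round $i"
      waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock              # block until the RPC socket answers
      rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # Malloc0, then Malloc1
      # ... NBD start, data verify, and NBD stop as traced below ...
      rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM   # end this round
      sleep 3                                                       # give the -t 4 app time to restart
  done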
00:05:29.914 [2024-11-20 15:15:18.825371] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid370358 ] 00:05:30.176 [2024-11-20 15:15:18.918311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:30.176 [2024-11-20 15:15:18.949192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.176 [2024-11-20 15:15:18.949204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.176 15:15:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.176 15:15:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:30.176 15:15:19 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.438 Malloc0 00:05:30.438 15:15:19 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.438 Malloc1 00:05:30.700 15:15:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.700 15:15:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.700 15:15:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.700 15:15:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:30.700 15:15:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.700 15:15:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:30.700 15:15:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.700 15:15:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.700 15:15:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.700 15:15:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:30.700 15:15:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.700 15:15:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:30.700 15:15:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:30.700 15:15:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:30.700 15:15:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.700 15:15:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:30.700 /dev/nbd0 00:05:30.700 15:15:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:30.700 15:15:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:30.700 15:15:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:30.700 15:15:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:30.700 15:15:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:30.700 15:15:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:30.700 15:15:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:05:30.700 15:15:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:30.700 15:15:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:30.700 15:15:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:30.700 15:15:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.700 1+0 records in 00:05:30.700 1+0 records out 00:05:30.700 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271418 s, 15.1 MB/s 00:05:30.700 15:15:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.700 15:15:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:30.700 15:15:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.700 15:15:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:30.700 15:15:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:30.962 15:15:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.962 15:15:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.962 15:15:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:30.962 /dev/nbd1 00:05:30.962 15:15:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:30.962 15:15:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:30.962 15:15:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:30.962 15:15:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:30.962 15:15:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:30.962 15:15:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:30.962 15:15:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:30.962 15:15:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:30.962 15:15:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:30.962 15:15:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:30.962 15:15:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.962 1+0 records in 00:05:30.962 1+0 records out 00:05:30.962 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273142 s, 15.0 MB/s 00:05:30.962 15:15:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.962 15:15:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:30.962 15:15:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.962 15:15:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:30.962 15:15:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:30.962 15:15:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.962 15:15:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.962 
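Both NBD devices came up through the same readiness probe, waitfornbd, whose xtrace appears above (@872-@893): poll /proc/partitions until the kernel lists the device, then prove it is actually readable by pulling one block with direct I/O and checking that a non-empty file landed on disk. A sketch under those assumptions (the test-file path is shortened and the back-off sleep is assumed, since the trace only shows the success path):

  waitfornbd() {
      local nbd_name=$1 i size
      for ((i = 1; i <= 20; i++)); do                      # @875: bounded retry loop
          grep -q -w "$nbd_name" /proc/partitions && break # @876: kernel sees the device
          sleep 0.1                                        # assumed back-off between polls
      done
      dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct  # @889: one direct-I/O block
      size=$(stat -c %s /tmp/nbdtest)                      # @890
      rm -f /tmp/nbdtest                                   # @891
      [ "$size" != 0 ]                                     # @892: non-empty read means ready
  }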
15:15:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.962 15:15:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.962 15:15:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.223 15:15:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:31.223 { 00:05:31.223 "nbd_device": "/dev/nbd0", 00:05:31.223 "bdev_name": "Malloc0" 00:05:31.223 }, 00:05:31.223 { 00:05:31.223 "nbd_device": "/dev/nbd1", 00:05:31.223 "bdev_name": "Malloc1" 00:05:31.223 } 00:05:31.223 ]' 00:05:31.223 15:15:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:31.223 { 00:05:31.223 "nbd_device": "/dev/nbd0", 00:05:31.223 "bdev_name": "Malloc0" 00:05:31.223 }, 00:05:31.223 { 00:05:31.223 "nbd_device": "/dev/nbd1", 00:05:31.223 "bdev_name": "Malloc1" 00:05:31.223 } 00:05:31.223 ]' 00:05:31.223 15:15:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.223 15:15:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:31.223 /dev/nbd1' 00:05:31.223 15:15:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:31.223 /dev/nbd1' 00:05:31.223 15:15:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.223 15:15:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:31.223 15:15:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:31.223 15:15:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:31.223 15:15:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:31.224 15:15:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:31.224 15:15:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.224 15:15:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.224 15:15:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:31.224 15:15:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.224 15:15:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:31.224 15:15:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:31.224 256+0 records in 00:05:31.224 256+0 records out 00:05:31.224 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012721 s, 82.4 MB/s 00:05:31.224 15:15:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.224 15:15:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:31.224 256+0 records in 00:05:31.224 256+0 records out 00:05:31.224 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120747 s, 86.8 MB/s 00:05:31.224 15:15:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.224 15:15:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:31.484 256+0 records in 00:05:31.484 256+0 records out 00:05:31.484 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126956 s, 82.6 MB/s 00:05:31.484 15:15:20 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:31.484 15:15:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.484 15:15:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.484 15:15:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:31.484 15:15:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.484 15:15:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:31.484 15:15:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:31.484 15:15:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.484 15:15:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:31.484 15:15:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.484 15:15:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:31.484 15:15:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.484 15:15:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:31.484 15:15:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.484 15:15:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.484 15:15:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:31.484 15:15:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:31.484 15:15:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.484 15:15:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:31.484 15:15:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:31.484 15:15:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:31.484 15:15:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:31.484 15:15:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.484 15:15:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.484 15:15:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:31.484 15:15:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.484 15:15:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.484 15:15:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.484 15:15:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:31.745 15:15:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:31.745 15:15:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:31.745 15:15:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:31.745 15:15:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.745 15:15:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:31.745 15:15:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:31.745 15:15:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.745 15:15:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.745 15:15:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.745 15:15:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.745 15:15:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:32.006 15:15:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:32.006 15:15:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:32.006 15:15:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:32.006 15:15:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:32.006 15:15:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:32.006 15:15:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:32.006 15:15:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:32.006 15:15:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:32.006 15:15:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:32.006 15:15:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:32.006 15:15:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:32.006 15:15:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:32.006 15:15:20 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:32.268 15:15:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:32.268 [2024-11-20 15:15:21.093830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.268 [2024-11-20 15:15:21.124580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.268 [2024-11-20 15:15:21.124580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.268 [2024-11-20 15:15:21.153899] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:32.268 [2024-11-20 15:15:21.153927] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:35.569 15:15:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:35.569 15:15:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:35.569 spdk_app_start Round 1 00:05:35.569 15:15:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 370358 /var/tmp/spdk-nbd.sock 00:05:35.569 15:15:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 370358 ']' 00:05:35.569 15:15:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:35.569 15:15:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.569 15:15:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:35.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:35.569 15:15:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.569 15:15:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:35.569 15:15:24 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.569 15:15:24 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:35.569 15:15:24 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.569 Malloc0 00:05:35.569 15:15:24 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.830 Malloc1 00:05:35.830 15:15:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.830 15:15:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.830 15:15:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.830 15:15:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:35.830 15:15:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.830 15:15:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:35.830 15:15:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.830 15:15:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.830 15:15:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.831 15:15:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:35.831 15:15:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.831 15:15:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:35.831 15:15:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:35.831 15:15:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:35.831 15:15:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.831 15:15:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:35.831 /dev/nbd0 00:05:36.092 15:15:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:36.092 15:15:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:36.092 15:15:24 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:36.092 15:15:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:36.092 15:15:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:36.092 15:15:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:36.092 15:15:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:36.092 15:15:24 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:36.092 15:15:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:36.092 15:15:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:36.092 15:15:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:36.092 1+0 records in 00:05:36.092 1+0 records out 00:05:36.092 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274052 s, 14.9 MB/s 00:05:36.092 15:15:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.092 15:15:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:36.092 15:15:24 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.092 15:15:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:36.092 15:15:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:36.092 15:15:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.092 15:15:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.092 15:15:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:36.092 /dev/nbd1 00:05:36.092 15:15:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:36.092 15:15:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:36.092 15:15:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:36.092 15:15:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:36.092 15:15:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:36.092 15:15:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:36.092 15:15:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:36.092 15:15:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:36.092 15:15:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:36.092 15:15:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:36.092 15:15:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.092 1+0 records in 00:05:36.092 1+0 records out 00:05:36.092 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294263 s, 13.9 MB/s 00:05:36.092 15:15:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.092 15:15:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:36.092 15:15:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.092 15:15:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:36.092 15:15:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:36.092 15:15:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.092 15:15:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.092 15:15:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.092 15:15:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.092 15:15:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.353 15:15:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:36.353 { 00:05:36.353 "nbd_device": "/dev/nbd0", 00:05:36.353 "bdev_name": "Malloc0" 00:05:36.353 }, 00:05:36.353 { 00:05:36.353 "nbd_device": "/dev/nbd1", 00:05:36.353 "bdev_name": "Malloc1" 00:05:36.353 } 00:05:36.353 ]' 00:05:36.353 15:15:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:36.353 { 00:05:36.353 "nbd_device": "/dev/nbd0", 00:05:36.353 "bdev_name": "Malloc0" 00:05:36.353 }, 00:05:36.353 { 00:05:36.353 "nbd_device": "/dev/nbd1", 00:05:36.353 "bdev_name": "Malloc1" 00:05:36.353 } 00:05:36.353 ]' 00:05:36.353 15:15:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.353 15:15:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:36.353 /dev/nbd1' 00:05:36.353 15:15:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.353 15:15:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:36.353 /dev/nbd1' 00:05:36.353 15:15:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:36.353 15:15:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:36.353 15:15:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:36.353 15:15:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:36.353 15:15:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:36.353 15:15:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.353 15:15:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.353 15:15:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:36.353 15:15:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.353 15:15:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:36.353 15:15:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:36.353 256+0 records in 00:05:36.353 256+0 records out 00:05:36.353 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127134 s, 82.5 MB/s 00:05:36.353 15:15:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.353 15:15:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:36.353 256+0 records in 00:05:36.353 256+0 records out 00:05:36.353 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122271 s, 85.8 MB/s 00:05:36.614 15:15:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.614 15:15:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:36.614 256+0 records in 00:05:36.614 256+0 records out 00:05:36.614 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126642 s, 82.8 MB/s 00:05:36.614 15:15:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:36.614 15:15:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.614 15:15:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.614 15:15:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:36.614 15:15:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.614 15:15:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:36.614 15:15:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:36.614 15:15:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.614 15:15:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:36.614 15:15:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.614 15:15:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:36.614 15:15:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.614 15:15:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:36.614 15:15:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.614 15:15:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.614 15:15:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:36.614 15:15:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:36.614 15:15:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.614 15:15:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:36.614 15:15:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:36.614 15:15:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:36.614 15:15:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:36.615 15:15:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.615 15:15:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.615 15:15:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:36.615 15:15:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.615 15:15:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.615 15:15:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.615 15:15:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:36.876 15:15:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:36.876 15:15:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:36.876 15:15:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:36.876 15:15:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.876 15:15:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.876 15:15:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:36.876 15:15:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.876 15:15:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.876 15:15:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.876 15:15:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.876 15:15:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.168 15:15:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:37.168 15:15:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:37.168 15:15:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.168 15:15:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:37.168 15:15:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:37.168 15:15:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.168 15:15:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:37.168 15:15:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:37.168 15:15:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:37.168 15:15:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:37.168 15:15:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:37.168 15:15:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:37.168 15:15:25 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:37.429 15:15:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:37.429 [2024-11-20 15:15:26.239956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.430 [2024-11-20 15:15:26.270192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.430 [2024-11-20 15:15:26.270219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.430 [2024-11-20 15:15:26.299826] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:37.430 [2024-11-20 15:15:26.299855] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:40.729 15:15:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:40.729 15:15:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:40.729 spdk_app_start Round 2 00:05:40.729 15:15:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 370358 /var/tmp/spdk-nbd.sock 00:05:40.729 15:15:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 370358 ']' 00:05:40.729 15:15:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:40.729 15:15:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.729 15:15:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:40.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
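Each round runs the same data-integrity pass, nbd_dd_data_verify (nbd_common.sh@70-@85): write 1 MiB of fresh random data through every NBD device, then read each device back and compare byte-for-byte against the source file. The helper is traced above in separate write and verify invocations; this sketch merges the two passes and shortens the temp-file path:

  verify_nbd_data() {                                    # merged write+verify sketch
      local tmp_file=/tmp/nbdrandtest nbd
      dd if=/dev/urandom of=$tmp_file bs=4096 count=256  # @76: 256 x 4 KiB = 1 MiB of random data
      for nbd in "$@"; do
          dd if=$tmp_file of=$nbd bs=4096 count=256 oflag=direct  # @78: push through the NBD device
      done
      for nbd in "$@"; do
          cmp -b -n 1M $tmp_file $nbd                    # @83: byte-wise compare, fail on mismatch
      done
      rm $tmp_file                                       # @85
  }
  verify_nbd_data /dev/nbd0 /dev/nbd1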
00:05:40.729 15:15:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.729 15:15:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:40.729 15:15:29 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.729 15:15:29 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:40.729 15:15:29 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.729 Malloc0 00:05:40.729 15:15:29 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.990 Malloc1 00:05:40.990 15:15:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.990 15:15:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.990 15:15:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.990 15:15:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:40.990 15:15:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.990 15:15:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:40.990 15:15:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.990 15:15:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.990 15:15:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.990 15:15:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:40.990 15:15:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.990 15:15:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:40.990 15:15:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:40.990 15:15:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:40.990 15:15:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.990 15:15:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:40.990 /dev/nbd0 00:05:40.990 15:15:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:40.990 15:15:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:40.990 15:15:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:40.990 15:15:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:40.990 15:15:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:40.990 15:15:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:40.990 15:15:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:40.990 15:15:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:40.990 15:15:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:40.990 15:15:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:40.990 15:15:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:41.251 1+0 records in 00:05:41.251 1+0 records out 00:05:41.251 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287781 s, 14.2 MB/s 00:05:41.251 15:15:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.251 15:15:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:41.251 15:15:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.251 15:15:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:41.251 15:15:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:41.251 15:15:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.251 15:15:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.251 15:15:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:41.251 /dev/nbd1 00:05:41.251 15:15:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:41.251 15:15:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:41.251 15:15:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:41.251 15:15:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:41.251 15:15:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:41.251 15:15:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:41.251 15:15:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:41.251 15:15:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:41.251 15:15:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:41.251 15:15:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:41.251 15:15:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.251 1+0 records in 00:05:41.251 1+0 records out 00:05:41.251 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274767 s, 14.9 MB/s 00:05:41.251 15:15:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.251 15:15:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:41.251 15:15:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.251 15:15:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:41.251 15:15:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:41.251 15:15:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.251 15:15:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.251 15:15:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.251 15:15:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.251 15:15:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.512 15:15:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:41.512 { 00:05:41.512 "nbd_device": "/dev/nbd0", 00:05:41.512 "bdev_name": "Malloc0" 00:05:41.512 }, 00:05:41.512 { 00:05:41.512 "nbd_device": "/dev/nbd1", 00:05:41.512 "bdev_name": "Malloc1" 00:05:41.512 } 00:05:41.512 ]' 00:05:41.512 15:15:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:41.512 { 00:05:41.512 "nbd_device": "/dev/nbd0", 00:05:41.512 "bdev_name": "Malloc0" 00:05:41.512 }, 00:05:41.512 { 00:05:41.512 "nbd_device": "/dev/nbd1", 00:05:41.512 "bdev_name": "Malloc1" 00:05:41.512 } 00:05:41.512 ]' 00:05:41.512 15:15:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.512 15:15:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:41.512 /dev/nbd1' 00:05:41.512 15:15:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:41.512 /dev/nbd1' 00:05:41.512 15:15:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.512 15:15:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:41.512 15:15:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:41.512 15:15:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:41.512 15:15:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:41.512 15:15:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:41.512 15:15:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.512 15:15:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.512 15:15:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:41.512 15:15:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.512 15:15:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:41.512 15:15:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:41.512 256+0 records in 00:05:41.512 256+0 records out 00:05:41.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127253 s, 82.4 MB/s 00:05:41.512 15:15:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.512 15:15:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:41.512 256+0 records in 00:05:41.512 256+0 records out 00:05:41.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126155 s, 83.1 MB/s 00:05:41.512 15:15:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.512 15:15:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:41.773 256+0 records in 00:05:41.773 256+0 records out 00:05:41.774 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131363 s, 79.8 MB/s 00:05:41.774 15:15:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:41.774 15:15:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.774 15:15:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.774 15:15:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:41.774 15:15:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.774 15:15:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:41.774 15:15:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:41.774 15:15:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.774 15:15:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:41.774 15:15:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.774 15:15:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:41.774 15:15:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.774 15:15:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:41.774 15:15:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.774 15:15:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.774 15:15:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:41.774 15:15:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:41.774 15:15:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.774 15:15:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:41.774 15:15:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:41.774 15:15:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:41.774 15:15:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:41.774 15:15:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.774 15:15:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.774 15:15:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:41.774 15:15:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.774 15:15:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.774 15:15:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.774 15:15:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:42.035 15:15:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:42.035 15:15:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:42.035 15:15:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:42.035 15:15:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.035 15:15:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.035 15:15:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:42.035 15:15:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:42.035 15:15:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.035 15:15:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.035 15:15:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.035 15:15:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.297 15:15:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:42.297 15:15:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:42.297 15:15:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.297 15:15:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:42.297 15:15:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:42.297 15:15:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.297 15:15:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:42.297 15:15:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:42.297 15:15:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:42.297 15:15:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:42.297 15:15:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:42.297 15:15:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:42.297 15:15:31 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:42.558 15:15:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:42.558 [2024-11-20 15:15:31.392287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.558 [2024-11-20 15:15:31.422469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.558 [2024-11-20 15:15:31.422470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.558 [2024-11-20 15:15:31.451666] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:42.558 [2024-11-20 15:15:31.451701] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:45.858 15:15:34 event.app_repeat -- event/event.sh@38 -- # waitforlisten 370358 /var/tmp/spdk-nbd.sock 00:05:45.858 15:15:34 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 370358 ']' 00:05:45.858 15:15:34 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:45.858 15:15:34 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.858 15:15:34 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:45.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
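Stopping the disks in each round relies on the inverse probe, waitfornbd_exit (nbd_common.sh@35-@45): after nbd_stop_disk is issued over RPC, poll /proc/partitions until the kernel no longer lists the device. A sketch from the trace (the sleep between polls is an assumption; the trace above only shows the immediate-success path where the loop breaks on its first check):

  waitfornbd_exit() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do                       # @37: bounded retry loop
          grep -q -w "$nbd_name" /proc/partitions || break  # @38: device gone from the kernel
          sleep 0.1                                         # assumed back-off between polls
      done
      return 0                                              # @45
  }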
00:05:45.858 15:15:34 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.858 15:15:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:45.858 15:15:34 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.858 15:15:34 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:45.858 15:15:34 event.app_repeat -- event/event.sh@39 -- # killprocess 370358 00:05:45.858 15:15:34 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 370358 ']' 00:05:45.858 15:15:34 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 370358 00:05:45.858 15:15:34 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:45.858 15:15:34 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.858 15:15:34 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 370358 00:05:45.858 15:15:34 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.858 15:15:34 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.858 15:15:34 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 370358' 00:05:45.858 killing process with pid 370358 00:05:45.858 15:15:34 event.app_repeat -- common/autotest_common.sh@973 -- # kill 370358 00:05:45.858 15:15:34 event.app_repeat -- common/autotest_common.sh@978 -- # wait 370358 00:05:45.858 spdk_app_start is called in Round 0. 00:05:45.858 Shutdown signal received, stop current app iteration 00:05:45.858 Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 reinitialization... 00:05:45.858 spdk_app_start is called in Round 1. 00:05:45.858 Shutdown signal received, stop current app iteration 00:05:45.858 Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 reinitialization... 00:05:45.858 spdk_app_start is called in Round 2. 00:05:45.858 Shutdown signal received, stop current app iteration 00:05:45.858 Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 reinitialization... 00:05:45.858 spdk_app_start is called in Round 3. 
00:05:45.858 Shutdown signal received, stop current app iteration 00:05:45.858 15:15:34 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:45.858 15:15:34 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:45.858 00:05:45.858 real 0m15.871s 00:05:45.858 user 0m34.839s 00:05:45.858 sys 0m2.302s 00:05:45.858 15:15:34 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.858 15:15:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:45.858 ************************************ 00:05:45.858 END TEST app_repeat 00:05:45.858 ************************************ 00:05:45.858 15:15:34 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:45.858 15:15:34 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:45.858 15:15:34 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.858 15:15:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.858 15:15:34 event -- common/autotest_common.sh@10 -- # set +x 00:05:45.858 ************************************ 00:05:45.858 START TEST cpu_locks 00:05:45.858 ************************************ 00:05:45.858 15:15:34 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:46.120 * Looking for test storage... 00:05:46.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:46.120 15:15:34 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:46.120 15:15:34 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:46.121 15:15:34 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:46.121 15:15:34 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:46.121 15:15:34 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.121 15:15:34 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.121 15:15:34 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.121 15:15:34 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.121 15:15:34 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.121 15:15:34 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.121 15:15:34 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.121 15:15:34 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.121 15:15:34 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.121 15:15:34 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.121 15:15:34 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.121 15:15:34 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:46.121 15:15:34 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:46.121 15:15:34 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.121 15:15:34 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.121 15:15:34 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:46.121 15:15:34 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:46.121 15:15:34 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.121 15:15:34 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:46.121 15:15:34 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.121 15:15:34 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:46.121 15:15:34 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:46.121 15:15:34 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.121 15:15:34 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:46.121 15:15:34 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.121 15:15:34 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.121 15:15:34 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.121 15:15:34 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:46.121 15:15:34 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.121 15:15:34 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:46.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.121 --rc genhtml_branch_coverage=1 00:05:46.121 --rc genhtml_function_coverage=1 00:05:46.121 --rc genhtml_legend=1 00:05:46.121 --rc geninfo_all_blocks=1 00:05:46.121 --rc geninfo_unexecuted_blocks=1 00:05:46.121 00:05:46.121 ' 00:05:46.121 15:15:34 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:46.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.121 --rc genhtml_branch_coverage=1 00:05:46.121 --rc genhtml_function_coverage=1 00:05:46.121 --rc genhtml_legend=1 00:05:46.121 --rc geninfo_all_blocks=1 00:05:46.121 --rc geninfo_unexecuted_blocks=1 00:05:46.121 00:05:46.121 ' 00:05:46.121 15:15:34 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:46.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.121 --rc genhtml_branch_coverage=1 00:05:46.121 --rc genhtml_function_coverage=1 00:05:46.121 --rc genhtml_legend=1 00:05:46.121 --rc geninfo_all_blocks=1 00:05:46.121 --rc geninfo_unexecuted_blocks=1 00:05:46.121 00:05:46.121 ' 00:05:46.121 15:15:34 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:46.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.121 --rc genhtml_branch_coverage=1 00:05:46.121 --rc genhtml_function_coverage=1 00:05:46.121 --rc genhtml_legend=1 00:05:46.121 --rc geninfo_all_blocks=1 00:05:46.121 --rc geninfo_unexecuted_blocks=1 00:05:46.121 00:05:46.121 ' 00:05:46.121 15:15:34 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:46.121 15:15:34 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:46.121 15:15:34 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:46.121 15:15:34 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:46.121 15:15:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.121 15:15:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.121 15:15:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.121 ************************************ 
00:05:46.121 START TEST default_locks 00:05:46.121 ************************************ 00:05:46.121 15:15:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:46.121 15:15:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=373773 00:05:46.121 15:15:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 373773 00:05:46.121 15:15:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.121 15:15:34 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 373773 ']' 00:05:46.121 15:15:34 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.121 15:15:34 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.121 15:15:34 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.121 15:15:34 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.121 15:15:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.121 [2024-11-20 15:15:35.036380] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:05:46.121 [2024-11-20 15:15:35.036445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid373773 ] 00:05:46.381 [2024-11-20 15:15:35.123390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.381 [2024-11-20 15:15:35.163075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.951 15:15:35 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.951 15:15:35 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:46.951 15:15:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 373773 00:05:46.951 15:15:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 373773 00:05:46.951 15:15:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.522 lslocks: write error 00:05:47.522 15:15:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 373773 00:05:47.522 15:15:36 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 373773 ']' 00:05:47.522 15:15:36 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 373773 00:05:47.522 15:15:36 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:47.522 15:15:36 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.522 15:15:36 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 373773 00:05:47.522 15:15:36 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.522 15:15:36 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.522 15:15:36 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 373773' 
00:05:47.522 killing process with pid 373773 00:05:47.522 15:15:36 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 373773 00:05:47.522 15:15:36 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 373773 00:05:47.784 15:15:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 373773 00:05:47.784 15:15:36 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:47.784 15:15:36 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 373773 00:05:47.784 15:15:36 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:47.784 15:15:36 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:47.784 15:15:36 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:47.784 15:15:36 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:47.784 15:15:36 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 373773 00:05:47.784 15:15:36 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 373773 ']' 00:05:47.784 15:15:36 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.784 15:15:36 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.784 15:15:36 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
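[annotation] Two details of the default_locks round above are worth unpacking. First, locks_exist confirms the target holds its core lock by piping lslocks -p <pid> through grep -q spdk_cpu_lock; because grep -q exits on the first match, lslocks loses its stdout mid-write, which is where the stray 'lslocks: write error' line comes from. Second, the NOT wrapper here re-runs waitforlisten against the now-dead pid and passes only because that call fails (the 'No such process' error just below). A sketch of the lock check, assuming the lock-file naming shown in the check_remaining_locks trace further down:

# Sketch only: spdk_tgt -m 0x1 pins core 0 and records its claim as
# /var/tmp/spdk_cpu_lock_000; lslocks lists that lock against the holder's pid.
locks_exist() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock   # grep -q quits early; lslocks may print 'write error'
}

build/bin/spdk_tgt -m 0x1 &
pid=$!
# ... waitforlisten "$pid" /var/tmp/spdk.sock ...
locks_exist "$pid" && echo "pid $pid holds its cpu core lock"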
00:05:47.784 15:15:36 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.784 15:15:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (373773) - No such process 00:05:47.784 ERROR: process (pid: 373773) is no longer running 00:05:47.784 15:15:36 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.784 15:15:36 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:47.784 15:15:36 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:47.784 15:15:36 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:47.784 15:15:36 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:47.784 15:15:36 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:47.784 15:15:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:47.784 15:15:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:47.785 15:15:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:47.785 15:15:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:47.785 00:05:47.785 real 0m1.603s 00:05:47.785 user 0m1.731s 00:05:47.785 sys 0m0.567s 00:05:47.785 15:15:36 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.785 15:15:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.785 ************************************ 00:05:47.785 END TEST default_locks 00:05:47.785 ************************************ 00:05:47.785 15:15:36 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:47.785 15:15:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.785 15:15:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.785 15:15:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.785 ************************************ 00:05:47.785 START TEST default_locks_via_rpc 00:05:47.785 ************************************ 00:05:47.785 15:15:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:47.785 15:15:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=374122 00:05:47.785 15:15:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 374122 00:05:47.785 15:15:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.785 15:15:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 374122 ']' 00:05:47.785 15:15:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.785 15:15:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.785 15:15:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
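[annotation] The default_locks_via_rpc round starting here exercises the same lock from the RPC side: the target launches normally with its locks held, framework_disable_cpumask_locks releases them at runtime, framework_enable_cpumask_locks re-acquires them, and locks_exist is then checked again. A minimal sketch of that toggle, assuming rpc.py's default socket (the two RPC method names appear verbatim in the trace below; everything else is illustrative):

# Minimal sketch of the runtime lock toggle exercised in this round.
scripts/rpc.py framework_disable_cpumask_locks   # release the core lock files while running
scripts/rpc.py framework_enable_cpumask_locks    # take them back
lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "locks are held again"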
00:05:47.785 15:15:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.785 15:15:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.785 [2024-11-20 15:15:36.714087] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:05:47.785 [2024-11-20 15:15:36.714136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid374122 ] 00:05:48.045 [2024-11-20 15:15:36.797464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.045 [2024-11-20 15:15:36.828055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.617 15:15:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.617 15:15:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:48.617 15:15:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:48.617 15:15:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.617 15:15:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.617 15:15:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.617 15:15:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:48.617 15:15:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:48.617 15:15:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:48.617 15:15:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:48.617 15:15:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:48.617 15:15:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.617 15:15:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.617 15:15:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.617 15:15:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 374122 00:05:48.617 15:15:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 374122 00:05:48.617 15:15:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:48.878 15:15:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 374122 00:05:48.878 15:15:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 374122 ']' 00:05:48.878 15:15:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 374122 00:05:48.878 15:15:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:48.878 15:15:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.878 15:15:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 374122 00:05:48.878 15:15:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.878 15:15:37 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:48.878 15:15:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 374122' 00:05:48.878 killing process with pid 374122 00:05:48.878 15:15:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 374122 00:05:48.878 15:15:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 374122 00:05:49.139 00:05:49.139 real 0m1.323s 00:05:49.139 user 0m1.436s 00:05:49.139 sys 0m0.440s 00:05:49.139 15:15:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.139 15:15:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.139 ************************************ 00:05:49.139 END TEST default_locks_via_rpc 00:05:49.139 ************************************ 00:05:49.139 15:15:38 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:49.139 15:15:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.139 15:15:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.139 15:15:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.139 ************************************ 00:05:49.139 START TEST non_locking_app_on_locked_coremask 00:05:49.139 ************************************ 00:05:49.139 15:15:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:49.139 15:15:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=374361 00:05:49.139 15:15:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 374361 /var/tmp/spdk.sock 00:05:49.139 15:15:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.139 15:15:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 374361 ']' 00:05:49.139 15:15:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.139 15:15:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.139 15:15:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.139 15:15:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.139 15:15:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.399 [2024-11-20 15:15:38.107629] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
00:05:49.400 [2024-11-20 15:15:38.107672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid374361 ] 00:05:49.400 [2024-11-20 15:15:38.158230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.400 [2024-11-20 15:15:38.188443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.660 15:15:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.660 15:15:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:49.660 15:15:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=374495 00:05:49.660 15:15:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 374495 /var/tmp/spdk2.sock 00:05:49.660 15:15:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:49.660 15:15:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 374495 ']' 00:05:49.660 15:15:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:49.660 15:15:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.660 15:15:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:49.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:49.660 15:15:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.660 15:15:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.660 [2024-11-20 15:15:38.426640] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:05:49.660 [2024-11-20 15:15:38.426695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid374495 ] 00:05:49.660 [2024-11-20 15:15:38.515098] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
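[annotation] That 'CPU core locks deactivated' notice is the point of non_locking_app_on_locked_coremask: a second spdk_tgt started with --disable-cpumask-locks comes up cleanly on the very mask another instance has already locked, because it never attempts to claim the lock files at all. A two-line sketch of the coexistence, using the same socket layout as the trace:

# Sketch: both instances share core 0 because only the first one registers
# /var/tmp/spdk_cpu_lock_000; the second skips locking entirely.
build/bin/spdk_tgt -m 0x1 &                                                  # takes the core 0 lock
build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # no lock, starts fine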
00:05:49.660 [2024-11-20 15:15:38.515121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.660 [2024-11-20 15:15:38.573519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.602 15:15:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.602 15:15:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:50.602 15:15:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 374361 00:05:50.602 15:15:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:50.602 15:15:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 374361 00:05:50.863 lslocks: write error 00:05:50.863 15:15:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 374361 00:05:50.863 15:15:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 374361 ']' 00:05:50.863 15:15:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 374361 00:05:50.863 15:15:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:50.863 15:15:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.863 15:15:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 374361 00:05:50.863 15:15:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:50.863 15:15:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:50.863 15:15:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 374361' 00:05:50.863 killing process with pid 374361 00:05:50.863 15:15:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 374361 00:05:50.863 15:15:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 374361 00:05:51.124 15:15:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 374495 00:05:51.124 15:15:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 374495 ']' 00:05:51.125 15:15:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 374495 00:05:51.125 15:15:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:51.125 15:15:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.125 15:15:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 374495 00:05:51.385 15:15:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:51.385 15:15:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:51.385 15:15:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 374495' 00:05:51.385 killing 
process with pid 374495 00:05:51.385 15:15:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 374495 00:05:51.385 15:15:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 374495 00:05:51.385 00:05:51.385 real 0m2.236s 00:05:51.385 user 0m2.482s 00:05:51.385 sys 0m0.770s 00:05:51.385 15:15:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.385 15:15:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.385 ************************************ 00:05:51.385 END TEST non_locking_app_on_locked_coremask 00:05:51.385 ************************************ 00:05:51.385 15:15:40 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:51.385 15:15:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.385 15:15:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.385 15:15:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.646 ************************************ 00:05:51.646 START TEST locking_app_on_unlocked_coremask 00:05:51.646 ************************************ 00:05:51.646 15:15:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:51.646 15:15:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=375023 00:05:51.646 15:15:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 375023 /var/tmp/spdk.sock 00:05:51.646 15:15:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:51.646 15:15:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 375023 ']' 00:05:51.646 15:15:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.646 15:15:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.646 15:15:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.646 15:15:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.646 15:15:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.646 [2024-11-20 15:15:40.427427] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:05:51.646 [2024-11-20 15:15:40.427488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid375023 ] 00:05:51.646 [2024-11-20 15:15:40.514950] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:51.646 [2024-11-20 15:15:40.514985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.646 [2024-11-20 15:15:40.555813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.587 15:15:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.588 15:15:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:52.588 15:15:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:52.588 15:15:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=375075 00:05:52.588 15:15:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 375075 /var/tmp/spdk2.sock 00:05:52.588 15:15:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 375075 ']' 00:05:52.588 15:15:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.588 15:15:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.588 15:15:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.588 15:15:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.588 15:15:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.588 [2024-11-20 15:15:41.278766] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
00:05:52.588 [2024-11-20 15:15:41.278820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid375075 ] 00:05:52.588 [2024-11-20 15:15:41.368213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.588 [2024-11-20 15:15:41.430597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.159 15:15:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.159 15:15:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:53.159 15:15:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 375075 00:05:53.159 15:15:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.159 15:15:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 375075 00:05:53.730 lslocks: write error 00:05:53.731 15:15:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 375023 00:05:53.731 15:15:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 375023 ']' 00:05:53.731 15:15:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 375023 00:05:53.731 15:15:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:53.731 15:15:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.731 15:15:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 375023 00:05:53.731 15:15:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.731 15:15:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.731 15:15:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 375023' 00:05:53.731 killing process with pid 375023 00:05:53.731 15:15:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 375023 00:05:53.731 15:15:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 375023 00:05:53.991 15:15:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 375075 00:05:53.991 15:15:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 375075 ']' 00:05:53.991 15:15:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 375075 00:05:53.991 15:15:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:53.991 15:15:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.991 15:15:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 375075 00:05:53.991 15:15:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.991 15:15:42 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.991 15:15:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 375075' 00:05:53.991 killing process with pid 375075 00:05:53.991 15:15:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 375075 00:05:53.991 15:15:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 375075 00:05:54.253 00:05:54.253 real 0m2.754s 00:05:54.253 user 0m3.102s 00:05:54.253 sys 0m0.817s 00:05:54.253 15:15:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.253 15:15:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.253 ************************************ 00:05:54.253 END TEST locking_app_on_unlocked_coremask 00:05:54.253 ************************************ 00:05:54.253 15:15:43 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:54.253 15:15:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.253 15:15:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.253 15:15:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.253 ************************************ 00:05:54.253 START TEST locking_app_on_locked_coremask 00:05:54.253 ************************************ 00:05:54.253 15:15:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:54.253 15:15:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=375466 00:05:54.253 15:15:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 375466 /var/tmp/spdk.sock 00:05:54.253 15:15:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.253 15:15:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 375466 ']' 00:05:54.253 15:15:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.253 15:15:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.253 15:15:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.253 15:15:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.253 15:15:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.523 [2024-11-20 15:15:43.258031] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
00:05:54.523 [2024-11-20 15:15:43.258087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid375466 ] 00:05:54.523 [2024-11-20 15:15:43.344442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.523 [2024-11-20 15:15:43.376069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.093 15:15:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.094 15:15:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:55.094 15:15:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=375779 00:05:55.094 15:15:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 375779 /var/tmp/spdk2.sock 00:05:55.094 15:15:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:55.094 15:15:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:55.094 15:15:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 375779 /var/tmp/spdk2.sock 00:05:55.094 15:15:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:55.094 15:15:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.094 15:15:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:55.094 15:15:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.094 15:15:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 375779 /var/tmp/spdk2.sock 00:05:55.094 15:15:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 375779 ']' 00:05:55.094 15:15:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.094 15:15:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.094 15:15:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.094 15:15:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.094 15:15:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.354 [2024-11-20 15:15:44.103193] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
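[annotation] This is the heart of locking_app_on_locked_coremask: a second spdk_tgt launched with the same -m 0x1 mask cannot create the lock on core 0, logs which pid already holds it, and exits before ever opening its RPC socket, so the NOT-wrapped waitforlisten fails exactly as intended. A sketch of the collision under those assumptions, with the second instance on its own RPC socket as in the trace:

# Sketch of the expected collision; '-r' gives the doomed second instance a
# separate RPC socket so it cannot be mistaken for the healthy first one.
build/bin/spdk_tgt -m 0x1 &                           # claims core 0
first=$!
# ... waitforlisten "$first" /var/tmp/spdk.sock ...
build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # same mask: must fail to start
second=$!
if wait "$second"; then
    echo "second instance unexpectedly acquired core 0" >&2
    exit 1
fi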
00:05:55.354 [2024-11-20 15:15:44.103248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid375779 ] 00:05:55.354 [2024-11-20 15:15:44.189709] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 375466 has claimed it. 00:05:55.354 [2024-11-20 15:15:44.189740] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:55.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (375779) - No such process 00:05:55.924 ERROR: process (pid: 375779) is no longer running 00:05:55.924 15:15:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.924 15:15:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:55.924 15:15:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:55.924 15:15:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:55.924 15:15:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:55.924 15:15:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:55.924 15:15:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 375466 00:05:55.924 15:15:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 375466 00:05:55.924 15:15:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.495 lslocks: write error 00:05:56.495 15:15:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 375466 00:05:56.495 15:15:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 375466 ']' 00:05:56.495 15:15:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 375466 00:05:56.495 15:15:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:56.495 15:15:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.495 15:15:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 375466 00:05:56.495 15:15:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.495 15:15:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.495 15:15:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 375466' 00:05:56.495 killing process with pid 375466 00:05:56.495 15:15:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 375466 00:05:56.495 15:15:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 375466 00:05:56.495 00:05:56.495 real 0m2.210s 00:05:56.495 user 0m2.501s 00:05:56.495 sys 0m0.607s 00:05:56.495 15:15:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.495 
15:15:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.495 ************************************ 00:05:56.495 END TEST locking_app_on_locked_coremask 00:05:56.495 ************************************ 00:05:56.495 15:15:45 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:56.495 15:15:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.495 15:15:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.495 15:15:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.756 ************************************ 00:05:56.757 START TEST locking_overlapped_coremask 00:05:56.757 ************************************ 00:05:56.757 15:15:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:56.757 15:15:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=376133 00:05:56.757 15:15:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 376133 /var/tmp/spdk.sock 00:05:56.757 15:15:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:56.757 15:15:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 376133 ']' 00:05:56.757 15:15:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.757 15:15:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.757 15:15:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.757 15:15:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.757 15:15:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.757 [2024-11-20 15:15:45.544296] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
00:05:56.757 [2024-11-20 15:15:45.544346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid376133 ] 00:05:56.757 [2024-11-20 15:15:45.627805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:56.757 [2024-11-20 15:15:45.659578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.757 [2024-11-20 15:15:45.659726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.757 [2024-11-20 15:15:45.659728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.698 15:15:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.698 15:15:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:57.698 15:15:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=376161 00:05:57.698 15:15:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:57.698 15:15:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 376161 /var/tmp/spdk2.sock 00:05:57.698 15:15:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:57.698 15:15:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 376161 /var/tmp/spdk2.sock 00:05:57.698 15:15:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:57.699 15:15:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.699 15:15:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:57.699 15:15:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.699 15:15:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 376161 /var/tmp/spdk2.sock 00:05:57.699 15:15:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 376161 ']' 00:05:57.699 15:15:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.699 15:15:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.699 15:15:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:57.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.699 15:15:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.699 15:15:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.699 [2024-11-20 15:15:46.400325] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
00:05:57.699 [2024-11-20 15:15:46.400379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid376161 ] 00:05:57.699 [2024-11-20 15:15:46.512663] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 376133 has claimed it. 00:05:57.699 [2024-11-20 15:15:46.512706] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:58.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (376161) - No such process 00:05:58.271 ERROR: process (pid: 376161) is no longer running 00:05:58.271 15:15:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.271 15:15:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:58.271 15:15:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:58.271 15:15:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:58.271 15:15:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:58.271 15:15:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:58.271 15:15:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:58.271 15:15:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:58.271 15:15:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:58.271 15:15:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:58.271 15:15:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 376133 00:05:58.271 15:15:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 376133 ']' 00:05:58.271 15:15:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 376133 00:05:58.271 15:15:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:58.271 15:15:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.272 15:15:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 376133 00:05:58.272 15:15:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.272 15:15:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.272 15:15:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 376133' 00:05:58.272 killing process with pid 376133 00:05:58.272 15:15:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 376133 00:05:58.272 15:15:47 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 376133 00:05:58.533 00:05:58.533 real 0m1.780s 00:05:58.533 user 0m5.170s 00:05:58.533 sys 0m0.390s 00:05:58.533 15:15:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.533 15:15:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.533 ************************************ 00:05:58.533 END TEST locking_overlapped_coremask 00:05:58.533 ************************************ 00:05:58.533 15:15:47 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:58.533 15:15:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.533 15:15:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.533 15:15:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.533 ************************************ 00:05:58.533 START TEST locking_overlapped_coremask_via_rpc 00:05:58.533 ************************************ 00:05:58.533 15:15:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:58.533 15:15:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=376521 00:05:58.533 15:15:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 376521 /var/tmp/spdk.sock 00:05:58.533 15:15:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:58.533 15:15:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 376521 ']' 00:05:58.533 15:15:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.533 15:15:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.533 15:15:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.533 15:15:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.533 15:15:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.533 [2024-11-20 15:15:47.414972] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:05:58.533 [2024-11-20 15:15:47.415029] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid376521 ] 00:05:58.794 [2024-11-20 15:15:47.499461] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:58.794 [2024-11-20 15:15:47.499486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:58.794 [2024-11-20 15:15:47.535280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.794 [2024-11-20 15:15:47.535433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.794 [2024-11-20 15:15:47.535434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.365 15:15:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.365 15:15:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:59.365 15:15:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=376538 00:05:59.365 15:15:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 376538 /var/tmp/spdk2.sock 00:05:59.365 15:15:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 376538 ']' 00:05:59.365 15:15:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:59.365 15:15:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.365 15:15:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.365 15:15:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:59.365 15:15:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.365 15:15:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.365 [2024-11-20 15:15:48.262334] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:05:59.365 [2024-11-20 15:15:48.262384] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid376538 ] 00:05:59.626 [2024-11-20 15:15:48.375604] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
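The collision being set up here is visible in the two core masks: the first target was started with -m 0x7 (cores 0-2) and this second one with -m 0x1c (cores 2-4), so the masks intersect on exactly one core. A quick check of the overlap (illustration only):

    printf 'shared cores mask: 0x%x\n' $(( 0x7 & 0x1c ))   # 0x4, i.e. core 2

That shared core 2 is why the claim_cpu_cores error below names core 2.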
00:05:59.626 [2024-11-20 15:15:48.375633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:59.626 [2024-11-20 15:15:48.453414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:59.626 [2024-11-20 15:15:48.453571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.626 [2024-11-20 15:15:48.453573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:00.198 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.198 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:00.198 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:00.198 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.198 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.198 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.198 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:00.198 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:00.198 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:00.198 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:00.198 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:00.198 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:00.198 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:00.198 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:00.198 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.198 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.198 [2024-11-20 15:15:49.062242] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 376521 has claimed it. 
00:06:00.198 request: 00:06:00.198 { 00:06:00.198 "method": "framework_enable_cpumask_locks", 00:06:00.198 "req_id": 1 00:06:00.198 } 00:06:00.198 Got JSON-RPC error response 00:06:00.198 response: 00:06:00.198 { 00:06:00.198 "code": -32603, 00:06:00.198 "message": "Failed to claim CPU core: 2" 00:06:00.198 } 00:06:00.198 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:00.198 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:00.198 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:00.198 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:00.198 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:00.198 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 376521 /var/tmp/spdk.sock 00:06:00.198 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 376521 ']' 00:06:00.198 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.198 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.198 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.198 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.198 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.463 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.463 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:00.463 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 376538 /var/tmp/spdk2.sock 00:06:00.463 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 376538 ']' 00:06:00.463 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.463 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.463 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
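Stripped of the NOT/valid_exec_arg wrappers, the failing step above is a single JSON-RPC call against the second target's socket, and the -32603 "Failed to claim CPU core: 2" response is the expected outcome because pid 376521 already holds the core-2 lock:

    # Equivalent hand-issued call (same request/response as logged above):
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk2.sock framework_enable_cpumask_locks

    # check_remaining_locks then verifies that only the first target's lock
    # files exist, matching the glob comparison in the trace below:
    ls /var/tmp/spdk_cpu_lock_*   # expects ..._000 ..._001 ..._002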
00:06:00.463 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.463 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.726 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.726 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:00.726 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:00.726 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:00.726 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:00.726 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:00.726 00:06:00.726 real 0m2.090s 00:06:00.726 user 0m0.857s 00:06:00.726 sys 0m0.159s 00:06:00.726 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.726 15:15:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.726 ************************************ 00:06:00.726 END TEST locking_overlapped_coremask_via_rpc 00:06:00.726 ************************************ 00:06:00.726 15:15:49 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:00.726 15:15:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 376521 ]] 00:06:00.726 15:15:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 376521 00:06:00.726 15:15:49 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 376521 ']' 00:06:00.726 15:15:49 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 376521 00:06:00.726 15:15:49 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:00.726 15:15:49 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.726 15:15:49 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 376521 00:06:00.726 15:15:49 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:00.726 15:15:49 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:00.726 15:15:49 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 376521' 00:06:00.726 killing process with pid 376521 00:06:00.726 15:15:49 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 376521 00:06:00.726 15:15:49 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 376521 00:06:00.987 15:15:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 376538 ]] 00:06:00.987 15:15:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 376538 00:06:00.987 15:15:49 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 376538 ']' 00:06:00.987 15:15:49 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 376538 00:06:00.987 15:15:49 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:00.987 15:15:49 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:06:00.987 15:15:49 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 376538 00:06:00.987 15:15:49 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:00.987 15:15:49 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:00.987 15:15:49 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 376538' 00:06:00.987 killing process with pid 376538 00:06:00.987 15:15:49 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 376538 00:06:00.987 15:15:49 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 376538 00:06:01.248 15:15:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:01.248 15:15:49 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:01.248 15:15:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 376521 ]] 00:06:01.248 15:15:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 376521 00:06:01.248 15:15:49 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 376521 ']' 00:06:01.248 15:15:49 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 376521 00:06:01.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (376521) - No such process 00:06:01.248 15:15:49 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 376521 is not found' 00:06:01.248 Process with pid 376521 is not found 00:06:01.248 15:15:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 376538 ]] 00:06:01.248 15:15:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 376538 00:06:01.248 15:15:49 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 376538 ']' 00:06:01.248 15:15:49 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 376538 00:06:01.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (376538) - No such process 00:06:01.248 15:15:49 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 376538 is not found' 00:06:01.248 Process with pid 376538 is not found 00:06:01.248 15:15:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:01.248 00:06:01.248 real 0m15.254s 00:06:01.248 user 0m27.228s 00:06:01.248 sys 0m4.713s 00:06:01.248 15:15:49 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.248 15:15:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.248 ************************************ 00:06:01.248 END TEST cpu_locks 00:06:01.248 ************************************ 00:06:01.248 00:06:01.248 real 0m41.194s 00:06:01.248 user 1m21.524s 00:06:01.248 sys 0m8.142s 00:06:01.248 15:15:50 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.248 15:15:50 event -- common/autotest_common.sh@10 -- # set +x 00:06:01.248 ************************************ 00:06:01.248 END TEST event 00:06:01.248 ************************************ 00:06:01.248 15:15:50 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:01.248 15:15:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.248 15:15:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.248 15:15:50 -- common/autotest_common.sh@10 -- # set +x 00:06:01.248 ************************************ 00:06:01.248 START TEST thread 00:06:01.248 ************************************ 00:06:01.248 15:15:50 thread -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:01.248 * Looking for test storage... 00:06:01.249 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:01.537 15:15:50 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:01.537 15:15:50 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:01.537 15:15:50 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:01.537 15:15:50 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:01.537 15:15:50 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.537 15:15:50 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.537 15:15:50 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.537 15:15:50 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.537 15:15:50 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.537 15:15:50 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.537 15:15:50 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.537 15:15:50 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.537 15:15:50 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.537 15:15:50 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.537 15:15:50 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.537 15:15:50 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:01.537 15:15:50 thread -- scripts/common.sh@345 -- # : 1 00:06:01.537 15:15:50 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.537 15:15:50 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:01.537 15:15:50 thread -- scripts/common.sh@365 -- # decimal 1 00:06:01.537 15:15:50 thread -- scripts/common.sh@353 -- # local d=1 00:06:01.537 15:15:50 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.537 15:15:50 thread -- scripts/common.sh@355 -- # echo 1 00:06:01.537 15:15:50 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.537 15:15:50 thread -- scripts/common.sh@366 -- # decimal 2 00:06:01.537 15:15:50 thread -- scripts/common.sh@353 -- # local d=2 00:06:01.537 15:15:50 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.537 15:15:50 thread -- scripts/common.sh@355 -- # echo 2 00:06:01.537 15:15:50 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.537 15:15:50 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.537 15:15:50 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.537 15:15:50 thread -- scripts/common.sh@368 -- # return 0 00:06:01.537 15:15:50 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.537 15:15:50 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:01.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.537 --rc genhtml_branch_coverage=1 00:06:01.537 --rc genhtml_function_coverage=1 00:06:01.537 --rc genhtml_legend=1 00:06:01.537 --rc geninfo_all_blocks=1 00:06:01.537 --rc geninfo_unexecuted_blocks=1 00:06:01.537 00:06:01.537 ' 00:06:01.537 15:15:50 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:01.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.537 --rc genhtml_branch_coverage=1 00:06:01.537 --rc genhtml_function_coverage=1 00:06:01.537 --rc genhtml_legend=1 00:06:01.537 --rc geninfo_all_blocks=1 00:06:01.537 --rc geninfo_unexecuted_blocks=1 00:06:01.537 00:06:01.537 ' 00:06:01.537 15:15:50 thread 
-- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:01.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.537 --rc genhtml_branch_coverage=1 00:06:01.537 --rc genhtml_function_coverage=1 00:06:01.537 --rc genhtml_legend=1 00:06:01.537 --rc geninfo_all_blocks=1 00:06:01.537 --rc geninfo_unexecuted_blocks=1 00:06:01.537 00:06:01.537 ' 00:06:01.537 15:15:50 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:01.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.537 --rc genhtml_branch_coverage=1 00:06:01.537 --rc genhtml_function_coverage=1 00:06:01.537 --rc genhtml_legend=1 00:06:01.537 --rc geninfo_all_blocks=1 00:06:01.537 --rc geninfo_unexecuted_blocks=1 00:06:01.537 00:06:01.537 ' 00:06:01.537 15:15:50 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:01.537 15:15:50 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:01.537 15:15:50 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.537 15:15:50 thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.537 ************************************ 00:06:01.537 START TEST thread_poller_perf 00:06:01.537 ************************************ 00:06:01.537 15:15:50 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:01.537 [2024-11-20 15:15:50.373449] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:06:01.537 [2024-11-20 15:15:50.373555] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid377174 ] 00:06:01.537 [2024-11-20 15:15:50.471593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.882 [2024-11-20 15:15:50.512723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.882 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:02.822 [2024-11-20T14:15:51.783Z] ====================================== 00:06:02.823 [2024-11-20T14:15:51.783Z] busy:2405245062 (cyc) 00:06:02.823 [2024-11-20T14:15:51.783Z] total_run_count: 418000 00:06:02.823 [2024-11-20T14:15:51.783Z] tsc_hz: 2400000000 (cyc) 00:06:02.823 [2024-11-20T14:15:51.783Z] ====================================== 00:06:02.823 [2024-11-20T14:15:51.783Z] poller_cost: 5754 (cyc), 2397 (nsec) 00:06:02.823 00:06:02.823 real 0m1.194s 00:06:02.823 user 0m1.095s 00:06:02.823 sys 0m0.094s 00:06:02.823 15:15:51 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.823 15:15:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:02.823 ************************************ 00:06:02.823 END TEST thread_poller_perf 00:06:02.823 ************************************ 00:06:02.823 15:15:51 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:02.823 15:15:51 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:02.823 15:15:51 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.823 15:15:51 thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.823 ************************************ 00:06:02.823 START TEST thread_poller_perf 00:06:02.823 ************************************ 00:06:02.823 15:15:51 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:02.823 [2024-11-20 15:15:51.647401] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:06:02.823 [2024-11-20 15:15:51.647505] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid377344 ] 00:06:02.823 [2024-11-20 15:15:51.706637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.823 [2024-11-20 15:15:51.740017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.823 Running 1000 pollers for 1 seconds with 0 microseconds period. 
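The poller_cost figure in each report is derived from the counters above it: busy cycles divided by total_run_count gives cycles per poll, and tsc_hz converts that to nanoseconds. Redoing the integer math for the 1-microsecond-period run reproduces the reported 5754 (cyc) and 2397 (nsec):

    busy=2405245062 runs=418000 tsc_hz=2400000000
    echo $(( busy / runs ))                         # 5754 cycles per poll
    echo $(( busy / runs * 1000000000 / tsc_hz ))   # 2397 nsec per poll

The 0-microsecond run that follows polls far more often (5564000 runs), so its per-poll cost drops to 431 cycles / 179 nsec.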
00:06:04.207 [2024-11-20T14:15:53.167Z] ====================================== 00:06:04.207 [2024-11-20T14:15:53.167Z] busy:2401488632 (cyc) 00:06:04.207 [2024-11-20T14:15:53.167Z] total_run_count: 5564000 00:06:04.207 [2024-11-20T14:15:53.167Z] tsc_hz: 2400000000 (cyc) 00:06:04.207 [2024-11-20T14:15:53.167Z] ====================================== 00:06:04.207 [2024-11-20T14:15:53.167Z] poller_cost: 431 (cyc), 179 (nsec) 00:06:04.207 00:06:04.207 real 0m1.141s 00:06:04.207 user 0m1.089s 00:06:04.207 sys 0m0.049s 00:06:04.207 15:15:52 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.207 15:15:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:04.207 ************************************ 00:06:04.207 END TEST thread_poller_perf 00:06:04.207 ************************************ 00:06:04.207 15:15:52 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:04.207 00:06:04.207 real 0m2.701s 00:06:04.207 user 0m2.354s 00:06:04.207 sys 0m0.359s 00:06:04.207 15:15:52 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.207 15:15:52 thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.207 ************************************ 00:06:04.207 END TEST thread 00:06:04.207 ************************************ 00:06:04.207 15:15:52 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:04.207 15:15:52 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:04.207 15:15:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.207 15:15:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.207 15:15:52 -- common/autotest_common.sh@10 -- # set +x 00:06:04.207 ************************************ 00:06:04.207 START TEST app_cmdline 00:06:04.207 ************************************ 00:06:04.207 15:15:52 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:04.207 * Looking for test storage... 
00:06:04.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:04.207 15:15:52 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:04.207 15:15:52 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:04.207 15:15:52 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:04.207 15:15:53 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:04.207 15:15:53 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.207 15:15:53 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.207 15:15:53 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.207 15:15:53 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.207 15:15:53 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.207 15:15:53 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.207 15:15:53 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.207 15:15:53 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.207 15:15:53 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.207 15:15:53 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.207 15:15:53 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.207 15:15:53 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:04.207 15:15:53 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:04.207 15:15:53 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.207 15:15:53 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:04.207 15:15:53 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:04.208 15:15:53 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:04.208 15:15:53 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.208 15:15:53 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:04.208 15:15:53 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.208 15:15:53 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:04.208 15:15:53 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:04.208 15:15:53 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.208 15:15:53 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:04.208 15:15:53 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.208 15:15:53 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.208 15:15:53 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.208 15:15:53 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:04.208 15:15:53 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.208 15:15:53 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:04.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.208 --rc genhtml_branch_coverage=1 00:06:04.208 --rc genhtml_function_coverage=1 00:06:04.208 --rc genhtml_legend=1 00:06:04.208 --rc geninfo_all_blocks=1 00:06:04.208 --rc geninfo_unexecuted_blocks=1 00:06:04.208 00:06:04.208 ' 00:06:04.208 15:15:53 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:04.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.208 --rc genhtml_branch_coverage=1 00:06:04.208 --rc genhtml_function_coverage=1 00:06:04.208 --rc genhtml_legend=1 00:06:04.208 --rc geninfo_all_blocks=1 00:06:04.208 --rc geninfo_unexecuted_blocks=1 
00:06:04.208 00:06:04.208 ' 00:06:04.208 15:15:53 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:04.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.208 --rc genhtml_branch_coverage=1 00:06:04.208 --rc genhtml_function_coverage=1 00:06:04.208 --rc genhtml_legend=1 00:06:04.208 --rc geninfo_all_blocks=1 00:06:04.208 --rc geninfo_unexecuted_blocks=1 00:06:04.208 00:06:04.208 ' 00:06:04.208 15:15:53 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:04.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.208 --rc genhtml_branch_coverage=1 00:06:04.208 --rc genhtml_function_coverage=1 00:06:04.208 --rc genhtml_legend=1 00:06:04.208 --rc geninfo_all_blocks=1 00:06:04.208 --rc geninfo_unexecuted_blocks=1 00:06:04.208 00:06:04.208 ' 00:06:04.208 15:15:53 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:04.208 15:15:53 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=377741 00:06:04.208 15:15:53 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 377741 00:06:04.208 15:15:53 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:04.208 15:15:53 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 377741 ']' 00:06:04.208 15:15:53 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.208 15:15:53 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.208 15:15:53 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.208 15:15:53 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.208 15:15:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:04.208 [2024-11-20 15:15:53.133945] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
00:06:04.208 [2024-11-20 15:15:53.133995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid377741 ] 00:06:04.469 [2024-11-20 15:15:53.218665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.469 [2024-11-20 15:15:53.250209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.040 15:15:53 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.040 15:15:53 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:05.040 15:15:53 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:05.301 { 00:06:05.301 "version": "SPDK v25.01-pre git sha1 32c3f377c", 00:06:05.301 "fields": { 00:06:05.301 "major": 25, 00:06:05.301 "minor": 1, 00:06:05.301 "patch": 0, 00:06:05.301 "suffix": "-pre", 00:06:05.301 "commit": "32c3f377c" 00:06:05.301 } 00:06:05.301 } 00:06:05.301 15:15:54 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:05.301 15:15:54 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:05.301 15:15:54 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:05.301 15:15:54 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:05.301 15:15:54 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:05.301 15:15:54 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:05.301 15:15:54 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.301 15:15:54 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:05.301 15:15:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:05.301 15:15:54 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.301 15:15:54 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:05.301 15:15:54 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:05.301 15:15:54 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:05.301 15:15:54 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:05.301 15:15:54 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:05.301 15:15:54 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:05.301 15:15:54 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.301 15:15:54 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:05.301 15:15:54 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.301 15:15:54 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:05.301 15:15:54 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.301 15:15:54 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:05.301 15:15:54 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:05.301 15:15:54 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:05.562 request: 00:06:05.562 { 00:06:05.562 "method": "env_dpdk_get_mem_stats", 00:06:05.562 "req_id": 1 00:06:05.562 } 00:06:05.562 Got JSON-RPC error response 00:06:05.562 response: 00:06:05.562 { 00:06:05.562 "code": -32601, 00:06:05.562 "message": "Method not found" 00:06:05.562 } 00:06:05.562 15:15:54 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:05.562 15:15:54 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:05.562 15:15:54 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:05.562 15:15:54 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:05.562 15:15:54 app_cmdline -- app/cmdline.sh@1 -- # killprocess 377741 00:06:05.562 15:15:54 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 377741 ']' 00:06:05.562 15:15:54 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 377741 00:06:05.562 15:15:54 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:05.562 15:15:54 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.562 15:15:54 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 377741 00:06:05.562 15:15:54 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.562 15:15:54 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.562 15:15:54 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 377741' 00:06:05.562 killing process with pid 377741 00:06:05.562 15:15:54 app_cmdline -- common/autotest_common.sh@973 -- # kill 377741 00:06:05.562 15:15:54 app_cmdline -- common/autotest_common.sh@978 -- # wait 377741 00:06:05.823 00:06:05.823 real 0m1.719s 00:06:05.823 user 0m2.054s 00:06:05.823 sys 0m0.467s 00:06:05.823 15:15:54 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.823 15:15:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:05.823 ************************************ 00:06:05.823 END TEST app_cmdline 00:06:05.823 ************************************ 00:06:05.823 15:15:54 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:05.823 15:15:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.823 15:15:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.823 15:15:54 -- common/autotest_common.sh@10 -- # set +x 00:06:05.823 ************************************ 00:06:05.823 START TEST version 00:06:05.823 ************************************ 00:06:05.823 15:15:54 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:05.823 * Looking for test storage... 
00:06:05.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:05.824 15:15:54 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:06.084 15:15:54 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:06.084 15:15:54 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:06.084 15:15:54 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:06.084 15:15:54 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.084 15:15:54 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.084 15:15:54 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.084 15:15:54 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.084 15:15:54 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.084 15:15:54 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.084 15:15:54 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.084 15:15:54 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.084 15:15:54 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.084 15:15:54 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.084 15:15:54 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.084 15:15:54 version -- scripts/common.sh@344 -- # case "$op" in 00:06:06.084 15:15:54 version -- scripts/common.sh@345 -- # : 1 00:06:06.084 15:15:54 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.084 15:15:54 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:06.084 15:15:54 version -- scripts/common.sh@365 -- # decimal 1 00:06:06.084 15:15:54 version -- scripts/common.sh@353 -- # local d=1 00:06:06.084 15:15:54 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.084 15:15:54 version -- scripts/common.sh@355 -- # echo 1 00:06:06.084 15:15:54 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.084 15:15:54 version -- scripts/common.sh@366 -- # decimal 2 00:06:06.084 15:15:54 version -- scripts/common.sh@353 -- # local d=2 00:06:06.084 15:15:54 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.084 15:15:54 version -- scripts/common.sh@355 -- # echo 2 00:06:06.084 15:15:54 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.084 15:15:54 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.084 15:15:54 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.084 15:15:54 version -- scripts/common.sh@368 -- # return 0 00:06:06.084 15:15:54 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.084 15:15:54 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:06.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.084 --rc genhtml_branch_coverage=1 00:06:06.084 --rc genhtml_function_coverage=1 00:06:06.084 --rc genhtml_legend=1 00:06:06.084 --rc geninfo_all_blocks=1 00:06:06.084 --rc geninfo_unexecuted_blocks=1 00:06:06.084 00:06:06.084 ' 00:06:06.084 15:15:54 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:06.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.084 --rc genhtml_branch_coverage=1 00:06:06.084 --rc genhtml_function_coverage=1 00:06:06.084 --rc genhtml_legend=1 00:06:06.084 --rc geninfo_all_blocks=1 00:06:06.084 --rc geninfo_unexecuted_blocks=1 00:06:06.084 00:06:06.084 ' 00:06:06.084 15:15:54 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:06.084 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.084 --rc genhtml_branch_coverage=1 00:06:06.084 --rc genhtml_function_coverage=1 00:06:06.084 --rc genhtml_legend=1 00:06:06.084 --rc geninfo_all_blocks=1 00:06:06.084 --rc geninfo_unexecuted_blocks=1 00:06:06.084 00:06:06.084 ' 00:06:06.084 15:15:54 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:06.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.084 --rc genhtml_branch_coverage=1 00:06:06.084 --rc genhtml_function_coverage=1 00:06:06.084 --rc genhtml_legend=1 00:06:06.084 --rc geninfo_all_blocks=1 00:06:06.084 --rc geninfo_unexecuted_blocks=1 00:06:06.084 00:06:06.084 ' 00:06:06.084 15:15:54 version -- app/version.sh@17 -- # get_header_version major 00:06:06.084 15:15:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:06.084 15:15:54 version -- app/version.sh@14 -- # cut -f2 00:06:06.085 15:15:54 version -- app/version.sh@14 -- # tr -d '"' 00:06:06.085 15:15:54 version -- app/version.sh@17 -- # major=25 00:06:06.085 15:15:54 version -- app/version.sh@18 -- # get_header_version minor 00:06:06.085 15:15:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:06.085 15:15:54 version -- app/version.sh@14 -- # cut -f2 00:06:06.085 15:15:54 version -- app/version.sh@14 -- # tr -d '"' 00:06:06.085 15:15:54 version -- app/version.sh@18 -- # minor=1 00:06:06.085 15:15:54 version -- app/version.sh@19 -- # get_header_version patch 00:06:06.085 15:15:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:06.085 15:15:54 version -- app/version.sh@14 -- # cut -f2 00:06:06.085 15:15:54 version -- app/version.sh@14 -- # tr -d '"' 00:06:06.085 15:15:54 version -- app/version.sh@19 -- # patch=0 00:06:06.085 15:15:54 version -- app/version.sh@20 -- # get_header_version suffix 00:06:06.085 15:15:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:06.085 15:15:54 version -- app/version.sh@14 -- # cut -f2 00:06:06.085 15:15:54 version -- app/version.sh@14 -- # tr -d '"' 00:06:06.085 15:15:54 version -- app/version.sh@20 -- # suffix=-pre 00:06:06.085 15:15:54 version -- app/version.sh@22 -- # version=25.1 00:06:06.085 15:15:54 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:06.085 15:15:54 version -- app/version.sh@28 -- # version=25.1rc0 00:06:06.085 15:15:54 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:06.085 15:15:54 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:06.085 15:15:54 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:06.085 15:15:54 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:06.085 00:06:06.085 real 0m0.285s 00:06:06.085 user 0m0.170s 00:06:06.085 sys 0m0.165s 00:06:06.085 15:15:54 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.085 
15:15:54 version -- common/autotest_common.sh@10 -- # set +x 00:06:06.085 ************************************ 00:06:06.085 END TEST version 00:06:06.085 ************************************ 00:06:06.085 15:15:55 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:06.085 15:15:55 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:06.085 15:15:55 -- spdk/autotest.sh@194 -- # uname -s 00:06:06.085 15:15:55 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:06.085 15:15:55 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:06.085 15:15:55 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:06.085 15:15:55 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:06.085 15:15:55 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:06.085 15:15:55 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:06.085 15:15:55 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:06.085 15:15:55 -- common/autotest_common.sh@10 -- # set +x 00:06:06.347 15:15:55 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:06.347 15:15:55 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:06.347 15:15:55 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:06.347 15:15:55 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:06.347 15:15:55 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:06.347 15:15:55 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:06.347 15:15:55 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:06.347 15:15:55 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:06.347 15:15:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.347 15:15:55 -- common/autotest_common.sh@10 -- # set +x 00:06:06.347 ************************************ 00:06:06.347 START TEST nvmf_tcp 00:06:06.347 ************************************ 00:06:06.347 15:15:55 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:06.347 * Looking for test storage... 
00:06:06.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:06.347 15:15:55 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:06.347 15:15:55 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:06.347 15:15:55 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:06.347 15:15:55 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:06.347 15:15:55 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.347 15:15:55 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.347 15:15:55 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.347 15:15:55 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.347 15:15:55 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.347 15:15:55 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.347 15:15:55 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.347 15:15:55 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.347 15:15:55 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.347 15:15:55 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.347 15:15:55 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.347 15:15:55 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:06.347 15:15:55 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:06.347 15:15:55 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.347 15:15:55 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:06.347 15:15:55 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:06.347 15:15:55 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:06.347 15:15:55 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.347 15:15:55 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:06.347 15:15:55 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.348 15:15:55 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:06.348 15:15:55 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:06.348 15:15:55 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.348 15:15:55 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:06.348 15:15:55 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.348 15:15:55 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.348 15:15:55 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.348 15:15:55 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:06.348 15:15:55 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.348 15:15:55 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:06.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.348 --rc genhtml_branch_coverage=1 00:06:06.348 --rc genhtml_function_coverage=1 00:06:06.348 --rc genhtml_legend=1 00:06:06.348 --rc geninfo_all_blocks=1 00:06:06.348 --rc geninfo_unexecuted_blocks=1 00:06:06.348 00:06:06.348 ' 00:06:06.348 15:15:55 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:06.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.348 --rc genhtml_branch_coverage=1 00:06:06.348 --rc genhtml_function_coverage=1 00:06:06.348 --rc genhtml_legend=1 00:06:06.348 --rc geninfo_all_blocks=1 00:06:06.348 --rc geninfo_unexecuted_blocks=1 00:06:06.348 00:06:06.348 ' 00:06:06.348 15:15:55 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:06:06.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.348 --rc genhtml_branch_coverage=1 00:06:06.348 --rc genhtml_function_coverage=1 00:06:06.348 --rc genhtml_legend=1 00:06:06.348 --rc geninfo_all_blocks=1 00:06:06.348 --rc geninfo_unexecuted_blocks=1 00:06:06.348 00:06:06.348 ' 00:06:06.348 15:15:55 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:06.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.348 --rc genhtml_branch_coverage=1 00:06:06.348 --rc genhtml_function_coverage=1 00:06:06.348 --rc genhtml_legend=1 00:06:06.348 --rc geninfo_all_blocks=1 00:06:06.348 --rc geninfo_unexecuted_blocks=1 00:06:06.348 00:06:06.348 ' 00:06:06.348 15:15:55 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:06.348 15:15:55 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:06.348 15:15:55 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:06.348 15:15:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:06.348 15:15:55 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.348 15:15:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:06.610 ************************************ 00:06:06.610 START TEST nvmf_target_core 00:06:06.610 ************************************ 00:06:06.610 15:15:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:06.610 * Looking for test storage... 00:06:06.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:06.610 15:15:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:06.610 15:15:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:06:06.610 15:15:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:06.610 15:15:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:06.610 15:15:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.610 15:15:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.610 15:15:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.610 15:15:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.610 15:15:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.610 15:15:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.610 15:15:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.610 15:15:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.610 15:15:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.610 15:15:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.610 15:15:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.610 15:15:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:06.610 15:15:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:06.610 15:15:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.610 15:15:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.610 15:15:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:06.610 15:15:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:06.610 15:15:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.610 15:15:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:06.610 15:15:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.610 15:15:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:06.610 15:15:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:06.610 15:15:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.610 15:15:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:06.610 15:15:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.610 15:15:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.610 15:15:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.610 15:15:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:06.610 15:15:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.610 15:15:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:06.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.610 --rc genhtml_branch_coverage=1 00:06:06.610 --rc genhtml_function_coverage=1 00:06:06.610 --rc genhtml_legend=1 00:06:06.610 --rc geninfo_all_blocks=1 00:06:06.610 --rc geninfo_unexecuted_blocks=1 00:06:06.610 00:06:06.610 ' 00:06:06.610 15:15:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:06.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.610 --rc genhtml_branch_coverage=1 00:06:06.610 --rc genhtml_function_coverage=1 00:06:06.610 --rc genhtml_legend=1 00:06:06.610 --rc geninfo_all_blocks=1 00:06:06.610 --rc geninfo_unexecuted_blocks=1 00:06:06.610 00:06:06.610 ' 00:06:06.610 15:15:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:06.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.610 --rc genhtml_branch_coverage=1 00:06:06.610 --rc genhtml_function_coverage=1 00:06:06.610 --rc genhtml_legend=1 00:06:06.610 --rc geninfo_all_blocks=1 00:06:06.610 --rc geninfo_unexecuted_blocks=1 00:06:06.610 00:06:06.611 ' 00:06:06.611 15:15:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:06.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.611 --rc genhtml_branch_coverage=1 00:06:06.611 --rc genhtml_function_coverage=1 00:06:06.611 --rc genhtml_legend=1 00:06:06.611 --rc geninfo_all_blocks=1 00:06:06.611 --rc geninfo_unexecuted_blocks=1 00:06:06.611 00:06:06.611 ' 00:06:06.611 15:15:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:06.611 15:15:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:06.611 15:15:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:06.611 15:15:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:06.611 15:15:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:06.611 15:15:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:06.611 15:15:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:06.611 15:15:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:06.611 15:15:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:06.611 15:15:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:06.611 15:15:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:06.611 15:15:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:06.611 15:15:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:06.611 15:15:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:06.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:06.873 
************************************ 00:06:06.873 START TEST nvmf_abort 00:06:06.873 ************************************ 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:06.873 * Looking for test storage... 00:06:06.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:06.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.873 --rc genhtml_branch_coverage=1 00:06:06.873 --rc genhtml_function_coverage=1 00:06:06.873 --rc genhtml_legend=1 00:06:06.873 --rc geninfo_all_blocks=1 00:06:06.873 --rc geninfo_unexecuted_blocks=1 00:06:06.873 00:06:06.873 ' 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:06.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.873 --rc genhtml_branch_coverage=1 00:06:06.873 --rc genhtml_function_coverage=1 00:06:06.873 --rc genhtml_legend=1 00:06:06.873 --rc geninfo_all_blocks=1 00:06:06.873 --rc geninfo_unexecuted_blocks=1 00:06:06.873 00:06:06.873 ' 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:06.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.873 --rc genhtml_branch_coverage=1 00:06:06.873 --rc genhtml_function_coverage=1 00:06:06.873 --rc genhtml_legend=1 00:06:06.873 --rc geninfo_all_blocks=1 00:06:06.873 --rc geninfo_unexecuted_blocks=1 00:06:06.873 00:06:06.873 ' 00:06:06.873 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:06.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.873 --rc genhtml_branch_coverage=1 00:06:06.873 --rc genhtml_function_coverage=1 00:06:06.874 --rc genhtml_legend=1 00:06:06.874 --rc geninfo_all_blocks=1 00:06:06.874 --rc geninfo_unexecuted_blocks=1 00:06:06.874 00:06:06.874 ' 00:06:06.874 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:07.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
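Two things in the preamble traced above (it repeats once per test) are worth unpacking. First, autotest_common.sh probes the installed lcov version: scripts/common.sh splits each version string on ".", "-" and ":" into arrays and compares the components numerically, so "lt 1.15 2" is true here and the legacy "--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1" option spelling is selected. A minimal standalone sketch of that comparison; version_lt is an illustrative name, not the script's own:

    # Returns 0 (true) when version $1 < version $2.
    version_lt() {
        local IFS=.-:            # split on the same separators the trace uses
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # missing fields count as 0
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        done
        return 1                 # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "old lcov: use the --rc option spelling"

Second, the trace captures a genuine script bug at nvmf/common.sh line 33: the guard runs '[' '' -eq 1 ']', and test's -eq requires integer operands, hence "[: : integer expression expected". The guard merely evaluates false and the run continues, but the usual fix is to default the expansion. A hedged sketch, since the log does not show which variable is empty; SPDK_FLAG and enable_extra_args are placeholders:

    if [ "${SPDK_FLAG:-0}" -eq 1 ]; then   # ":-0" guarantees test sees an integer
        enable_extra_args
    fi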
00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:07.135 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:07.136 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:07.136 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:07.136 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:07.136 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:07.136 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:07.136 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:07.136 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:07.136 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:07.136 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:07.136 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:15.281 15:16:03 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:15.281 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:15.281 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:15.281 15:16:03 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:15.281 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:15.281 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:15.281 15:16:03 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:15.281 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:15.282 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:15.282 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:15.282 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:15.282 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:15.282 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:15.282 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:15.282 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:15.282 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:15.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:15.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:06:15.282 00:06:15.282 --- 10.0.0.2 ping statistics --- 00:06:15.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:15.282 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:06:15.282 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:15.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:15.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:06:15.282 00:06:15.282 --- 10.0.0.1 ping statistics --- 00:06:15.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:15.282 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:06:15.282 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:15.282 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:15.282 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:15.282 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:15.282 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:15.282 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:15.282 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:15.282 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:15.282 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:15.282 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:15.282 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:15.282 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:15.282 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:15.282 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=382224 00:06:15.282 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 382224 00:06:15.282 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:15.282 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 382224 ']' 00:06:15.282 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.282 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.282 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.282 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.282 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:15.282 [2024-11-20 15:16:03.442335] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
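The test bed assembled in the preceding lines has two stages. gather_supported_nvmf_pci_devs whitelists NICs by PCI vendor:device ID; both E810 ports here report 0x8086:0x159b under the ice driver, and each PCI function is resolved to its kernel netdev (cvl_0_0, cvl_0_1) through sysfs. nvmf_tcp_init then splits the two ports into an isolated target/initiator pair: the target port moves into its own network namespace, an iptables rule opens TCP port 4420, and connectivity is ping-verified in both directions before any NVMe traffic. A condensed recap, with names and addresses exactly as in this run; the sysfs lookup is a rough equivalent of the discovery step, not the script's literal code:

    pci=0000:4b:00.0
    cat /sys/bus/pci/devices/$pci/vendor /sys/bus/pci/devices/$pci/device   # 0x8086 / 0x159b
    ls /sys/bus/pci/devices/$pci/net/                                       # -> cvl_0_0

    ip netns add cvl_0_0_ns_spdk                     # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator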
00:06:15.282 [2024-11-20 15:16:03.442403] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:15.282 [2024-11-20 15:16:03.545099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:15.282 [2024-11-20 15:16:03.600998] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:15.282 [2024-11-20 15:16:03.601053] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:15.282 [2024-11-20 15:16:03.601062] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:15.282 [2024-11-20 15:16:03.601068] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:15.282 [2024-11-20 15:16:03.601075] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:15.282 [2024-11-20 15:16:03.602918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.282 [2024-11-20 15:16:03.603080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.282 [2024-11-20 15:16:03.603080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:15.544 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.544 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:15.544 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:15.544 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:15.544 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:15.544 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:15.544 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:15.544 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.544 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:15.544 [2024-11-20 15:16:04.320332] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:15.544 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.545 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:15.545 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.545 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:15.545 Malloc0 00:06:15.545 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.545 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:15.545 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.545 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:15.545 Delay0 
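With networking up, nvmfappstart launches nvmf_tgt inside the namespace with core mask 0xE, which matches the "Total cores available: 3" notice and the three reactors started on cores 1, 2 and 3, and abort.sh configures the target over JSON-RPC. The rpc_cmd helper traced above wraps scripts/rpc.py, so the same setup can be expressed as plain rpc.py calls, flags copied verbatim from the trace. Note the design choice: the Delay bdev wraps Malloc0 with a large artificial latency (the -r/-t/-w/-n values set average and tail read/write latencies, in microseconds) so that I/O sits queued long enough for the abort commands exercised next to catch it in flight:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0      # 64 MiB backing RAM disk, 4 KiB blocks
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000           # deliberately slow I/O path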
00:06:15.545 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.545 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:15.545 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.545 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:15.545 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.545 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:15.545 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.545 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:15.545 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.545 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:15.545 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.545 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:15.545 [2024-11-20 15:16:04.410139] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:15.545 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.545 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:15.545 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.545 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:15.545 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.545 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:15.807 [2024-11-20 15:16:04.562361] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:18.352 Initializing NVMe Controllers 00:06:18.352 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:18.352 controller IO queue size 128 less than required 00:06:18.352 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:18.352 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:18.352 Initialization complete. Launching workers. 
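The remaining target-side setup and the host-side load generator, as traced above: subsystem nqn.2016-06.io.spdk:cnode0 exposes Delay0 as namespace 1, a data listener and a discovery listener are opened on 10.0.0.2:4420, and the abort example from the SPDK build tree connects with queue depth 128 (-q) on a single core (-c 0x1) for a short timed run (-t 1), issuing I/O and firing abort commands at its own outstanding requests; the per-worker tallies follow below. Equivalent commands, arguments verbatim from the trace:

    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    build/examples/abort -q 128 -c 0x1 -t 1 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'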
00:06:18.352 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28461 00:06:18.352 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28522, failed to submit 62 00:06:18.352 success 28465, unsuccessful 57, failed 0 00:06:18.352 15:16:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:18.352 15:16:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.352 15:16:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:18.352 15:16:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.352 15:16:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:18.352 15:16:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:18.352 15:16:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:18.352 15:16:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:18.352 15:16:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:18.352 15:16:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:18.352 15:16:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:18.352 15:16:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:18.352 rmmod nvme_tcp 00:06:18.352 rmmod nvme_fabrics 00:06:18.352 rmmod nvme_keyring 00:06:18.352 15:16:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:18.352 15:16:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:18.352 15:16:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:18.352 15:16:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 382224 ']' 00:06:18.352 15:16:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 382224 00:06:18.352 15:16:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 382224 ']' 00:06:18.352 15:16:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 382224 00:06:18.352 15:16:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:18.352 15:16:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.352 15:16:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 382224 00:06:18.352 15:16:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:18.352 15:16:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:18.352 15:16:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 382224' 00:06:18.352 killing process with pid 382224 00:06:18.352 15:16:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 382224 00:06:18.352 15:16:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 382224 00:06:18.352 15:16:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:18.352 15:16:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:18.352 15:16:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:18.352 15:16:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:18.352 15:16:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:18.352 15:16:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:18.353 15:16:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:18.353 15:16:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:18.353 15:16:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:18.353 15:16:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:18.353 15:16:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:18.353 15:16:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:20.266 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:20.266 00:06:20.266 real 0m13.485s 00:06:20.266 user 0m14.316s 00:06:20.266 sys 0m6.692s 00:06:20.266 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.266 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:20.266 ************************************ 00:06:20.266 END TEST nvmf_abort 00:06:20.266 ************************************ 00:06:20.266 15:16:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:20.266 15:16:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:20.266 15:16:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.266 15:16:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:20.266 ************************************ 00:06:20.266 START TEST nvmf_ns_hotplug_stress 00:06:20.266 ************************************ 00:06:20.266 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:20.528 * Looking for test storage... 
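Before the next test's preamble continues, a note on the nvmf_abort outcome and teardown above. Of 28,522 abort commands submitted, 28,465 succeeded and 57 were answered but unsuccessful, with none failing outright; the 28,461 "failed" I/Os appear to be the requests those aborts cancelled rather than transport errors (28,584 total I/Os matches 28,522 aborts plus 62 that could not be submitted), which is the behavior the test asserts. nvmftestfini then unwinds the bed symmetrically. A condensed recap; the netns removal line is an assumption about what _remove_spdk_ns does, the rest is verbatim from the trace:

    modprobe -r nvme-tcp nvme-fabrics                      # detach kernel initiator modules
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only SPDK's own firewall rule
    ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1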
00:06:20.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:20.528 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:20.528 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:06:20.528 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:20.528 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:20.528 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.528 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.528 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.528 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.528 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.528 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.528 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.528 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.528 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.528 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.528 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.528 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:20.528 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:20.528 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.528 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:20.528 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:20.528 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:20.528 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.528 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:20.528 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.528 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:20.528 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:20.528 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.528 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:20.528 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.528 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.528 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.528 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:20.528 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.528 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:20.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.528 --rc genhtml_branch_coverage=1 00:06:20.528 --rc genhtml_function_coverage=1 00:06:20.528 --rc genhtml_legend=1 00:06:20.528 --rc geninfo_all_blocks=1 00:06:20.528 --rc geninfo_unexecuted_blocks=1 00:06:20.528 00:06:20.528 ' 00:06:20.528 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:20.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.528 --rc genhtml_branch_coverage=1 00:06:20.528 --rc genhtml_function_coverage=1 00:06:20.528 --rc genhtml_legend=1 00:06:20.528 --rc geninfo_all_blocks=1 00:06:20.528 --rc geninfo_unexecuted_blocks=1 00:06:20.528 00:06:20.528 ' 00:06:20.528 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:20.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.528 --rc genhtml_branch_coverage=1 00:06:20.528 --rc genhtml_function_coverage=1 00:06:20.528 --rc genhtml_legend=1 00:06:20.528 --rc geninfo_all_blocks=1 00:06:20.528 --rc geninfo_unexecuted_blocks=1 00:06:20.529 00:06:20.529 ' 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:20.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.529 --rc genhtml_branch_coverage=1 00:06:20.529 --rc genhtml_function_coverage=1 00:06:20.529 --rc genhtml_legend=1 00:06:20.529 --rc geninfo_all_blocks=1 00:06:20.529 --rc geninfo_unexecuted_blocks=1 00:06:20.529 00:06:20.529 ' 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:20.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:20.529 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:28.674 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:28.674 
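Two details in the trace above deserve a note. First, the "[: : integer expression expected" message is a script-side artifact, not a target failure: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' with an unset flag, test rejects the empty operand, the command returns nonzero, and the script simply falls through to line 37. Guarding the expansion with a default, e.g. [ "${SPDK_TEST_FOO:-0}" -eq 1 ] (flag name hypothetical), would silence it without changing behavior.

Second, the e810/x722/mlx arrays being filled here are how the script decides which physical NICs the TCP tests may claim: known vendor:device pairs are looked up in a pci_bus_cache built from sysfs and classified by NIC family. A minimal sketch of the same classification, using lspci in place of the script's sysfs cache (the device IDs are the ones visible in this trace; the Mellanox wildcard is broader than the script's explicit list):

    #!/usr/bin/env bash
    # Classify NICs the way nvmf/common.sh fills its e810/x722/mlx arrays.
    intel=0x8086 mellanox=0x15b3
    e810=() x722=() mlx=()
    while read -r addr ven dev; do
      case "0x$ven:0x$dev" in
        "$intel:0x1592" | "$intel:0x159b") e810+=("$addr") ;;  # E810-C / E810-XXV
        "$intel:0x37d2")                   x722+=("$addr") ;;  # X722
        "$mellanox:"*)                     mlx+=("$addr")  ;;  # any ConnectX (sketch only)
      esac
    done < <(lspci -Dnmm | awk '{gsub(/"/, ""); print $1, $3, $4}')
    printf 'found %d e810 port(s): %s\n' "${#e810[@]}" "${e810[*]}"

On this node the lookup lands on the dual-port E810 at 0000:4b:00.0 / 0000:4b:00.1 (0x8086:0x159b, ice driver), which is what the "Found 0000:4b:00.x" echoes around this point confirm.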
15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:28.674 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:28.674 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:28.674 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:28.675 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:28.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:28.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:06:28.675 00:06:28.675 --- 10.0.0.2 ping statistics --- 00:06:28.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:28.675 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:28.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:28.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:06:28.675 00:06:28.675 --- 10.0.0.1 ping statistics --- 00:06:28.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:28.675 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:28.675 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:28.675 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:28.675 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:28.675 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:28.675 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:28.675 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=387272 00:06:28.675 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 387272 00:06:28.675 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 387272 ']' 00:06:28.675 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.675 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:28.675 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.675 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.675 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.675 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:28.675 [2024-11-20 15:16:17.066876] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:06:28.675 [2024-11-20 15:16:17.066938] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:28.675 [2024-11-20 15:16:17.168375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:28.675 [2024-11-20 15:16:17.219291] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:28.675 [2024-11-20 15:16:17.219341] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:28.675 [2024-11-20 15:16:17.219351] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:28.675 [2024-11-20 15:16:17.219358] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:28.675 [2024-11-20 15:16:17.219364] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
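Condensed, the bring-up that nvmftestinit just performed gives one host both ends of the connection: one port of the dual-port E810 moves into a private network namespace and becomes the target side, while the other stays in the root namespace as the initiator. The commands below are lifted from the trace (the ipts helper expands to the iptables line shown):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
    modprobe nvme-tcp                                    # kernel initiator for nvme-cli
    ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE         # target app on cores 1-3

The sub-millisecond ping RTTs (0.653 ms and 0.314 ms) suggest the two ports are looped back-to-back, and -m 0xE matches the three reactor threads reported on cores 1, 2 and 3 once the DPDK EAL finishes initializing.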
00:06:28.675 [2024-11-20 15:16:17.221140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.675 [2024-11-20 15:16:17.221303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.675 [2024-11-20 15:16:17.221303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:29.247 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.247 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:29.247 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:29.247 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:29.247 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:29.247 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:29.247 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:29.247 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:29.247 [2024-11-20 15:16:18.109043] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:29.247 15:16:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:29.508 15:16:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:29.770 [2024-11-20 15:16:18.516116] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:29.770 15:16:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:30.030 15:16:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:30.030 Malloc0 00:06:30.030 15:16:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:30.291 Delay0 00:06:30.291 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.552 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:30.812 NULL1 00:06:30.812 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:30.812 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=387699 00:06:30.812 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 387699 00:06:30.812 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:30.812 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.073 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.334 15:16:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:31.334 15:16:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:31.594 true 00:06:31.594 15:16:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 387699 00:06:31.594 15:16:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.594 15:16:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.855 15:16:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:31.855 15:16:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:32.115 true 00:06:32.115 15:16:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 387699 00:06:32.115 15:16:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.056 Read completed with error (sct=0, sc=11) 00:06:33.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.317 15:16:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.317 
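At this point the shape of the stress test is readable straight from the script line numbers in the trace: ns_hotplug_stress.sh@25-@36 build the target stack, @40-@42 start a 30-second spdk_nvme_perf run against it, and @44-@50 repeat for as long as that process is alive. A compact reconstruction under those assumptions (workspace paths shortened; the loop structure is inferred from the repeating @44-@50 entries):

    rpc=./scripts/rpc.py                       # full jenkins path in the log
    nqn=nqn.2016-06.io.spdk:cnode1

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0  # 32 MiB RAM bdev, 512 B blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s injected latency
    $rpc nvmf_subsystem_add_ns "$nqn" Delay0
    $rpc bdev_null_create NULL1 1000 512       # 1000 MiB null bdev
    $rpc nvmf_subsystem_add_ns "$nqn" NULL1

    spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!                                # 387699 in this run

    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do  # until perf exits
      $rpc nvmf_subsystem_remove_ns "$nqn" 1   # hot-unplug Delay0 (nsid 1)
      $rpc nvmf_subsystem_add_ns "$nqn" Delay0 # plug it back in
      null_size=$((null_size + 1))
      $rpc bdev_null_resize NULL1 "$null_size" # grow NULL1 by 1 MiB per pass
    done

The ~1 s delay bdev is what keeps reads in flight across each unplug, and the "true" printed after every bdev_null_resize is the RPC's return value.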
15:16:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:33.317 15:16:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:33.577 true 00:06:33.577 15:16:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 387699 00:06:33.577 15:16:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.520 15:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.520 15:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:34.520 15:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:34.780 true 00:06:34.780 15:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 387699 00:06:34.780 15:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.041 15:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.041 15:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:35.041 15:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:35.302 true 00:06:35.302 15:16:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 387699 00:06:35.302 15:16:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.686 15:16:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.686 
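For anyone decoding the suppressed errors: sct=0, sc=11 is NVMe status code type 0 (generic command status) with status 0x0b, Invalid Namespace or Format. That is the completion a random read should see when it lands in the window between nvmf_subsystem_remove_ns and the re-add, and the "Message suppressed 999 times" prefix is perf's own rate limiting of repeated identical errors. These bursts therefore indicate the hot-unplug is racing live I/O as intended rather than a test failure.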
15:16:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:36.686 15:16:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:36.946 true 00:06:36.946 15:16:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 387699 00:06:36.946 15:16:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.888 15:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.888 15:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:37.888 15:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:38.149 true 00:06:38.149 15:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 387699 00:06:38.149 15:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.149 15:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.409 15:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:38.409 15:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:38.670 true 00:06:38.670 15:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 387699 00:06:38.670 15:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.670 15:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.931 15:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:38.931 15:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:39.191 true 00:06:39.191 15:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 387699 00:06:39.191 15:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.191 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.456 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:39.456 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:39.717 true 00:06:39.717 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 387699 00:06:39.717 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.658 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.658 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.658 15:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.918 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.918 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.918 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.918 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.918 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.918 15:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:40.918 15:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:41.178 true 00:06:41.178 15:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 387699 00:06:41.178 15:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.119 15:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.119 15:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:42.119 15:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:42.380 true 00:06:42.380 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 387699 00:06:42.380 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.380 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.640 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:42.640 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:42.900 true 00:06:42.900 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 387699 00:06:42.900 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.283 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.283 15:16:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.283 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.283 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.283 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.283 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.283 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.283 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.283 15:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:44.283 15:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:44.283 true 00:06:44.283 15:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 387699 00:06:44.283 15:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.224 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.224 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:45.484 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:45.484 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:45.484 true 00:06:45.484 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 387699 00:06:45.484 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.745 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.005 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:46.005 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:46.005 true 00:06:46.005 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 387699 00:06:46.005 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.267 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.527 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:46.527 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:46.527 true 00:06:46.527 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 387699 00:06:46.527 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.787 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.047 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:47.047 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:47.047 true 00:06:47.047 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 387699 00:06:47.047 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.306 15:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.567 15:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:47.567 15:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:47.567 true 00:06:47.567 15:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 387699 00:06:47.567 15:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.828 15:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.089 15:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:48.089 15:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:48.089 true 00:06:48.089 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 387699 00:06:48.089 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.349 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.609 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:48.609 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:48.609 true 00:06:48.609 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 387699 00:06:48.609 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.868 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.127 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:49.127 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:49.127 true 00:06:49.127 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 387699 00:06:49.127 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.387 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.647 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:49.647 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:49.647 true 00:06:49.647 15:16:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 387699 00:06:49.647 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.932 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.192 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:50.192 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:50.192 true 00:06:50.192 15:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 387699 00:06:50.192 15:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.450 15:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.709 15:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:50.709 15:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:50.709 true 00:06:50.709 15:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 387699 00:06:50.709 15:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.969 15:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.230 15:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:51.230 15:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:51.230 true 00:06:51.230 15:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 387699 00:06:51.230 15:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.490 15:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.750 15:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:51.750 15:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:51.750 true 00:06:51.750 15:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 387699 00:06:51.750 15:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.009 15:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.269 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:52.269 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:52.269 true 00:06:52.269 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 387699 00:06:52.269 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.529 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.789 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:52.789 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:52.789 true 00:06:53.138 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 387699 00:06:53.138 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.138 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.438 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:53.438 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:53.438 true 00:06:53.438 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 387699 00:06:53.438 15:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.823 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.823 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.823 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.823 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.823 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.823 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.823 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.823 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.823 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:06:54.823 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:06:54.823 true 00:06:55.083 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 387699 00:06:55.083 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.653 15:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.914 15:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:06:55.914 15:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:06:56.173 true 00:06:56.173 15:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 387699 00:06:56.173 15:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.435 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.435 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:06:56.435 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:06:56.695 true 00:06:56.696 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 387699 00:06:56.696 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.080 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.080 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.080 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.080 Message suppressed 
999 times: Read completed with error (sct=0, sc=11) 00:06:58.080 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.080 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.080 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.080 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:06:58.080 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:06:58.080 true 00:06:58.080 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 387699 00:06:58.080 15:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.021 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.281 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:06:59.281 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:06:59.281 true 00:06:59.281 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 387699 00:06:59.281 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.540 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.800 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:06:59.800 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:06:59.800 true 00:06:59.800 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 387699 00:06:59.800 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.184 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.184 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.184 15:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.184 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.184 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.184 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.184 Message suppressed 999 
times: Read completed with error (sct=0, sc=11)
00:07:01.184 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:01.184 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:01.184 15:16:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037
00:07:01.184 15:16:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037
00:07:01.445 true
00:07:01.445 15:16:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 387699
00:07:01.445 15:16:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:02.388 Initializing NVMe Controllers
00:07:02.388 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:02.388 Controller IO queue size 128, less than required.
00:07:02.388 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:02.388 Controller IO queue size 128, less than required.
00:07:02.388 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:02.388 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:07:02.388 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:07:02.388 Initialization complete. Launching workers.
00:07:02.388 ========================================================
00:07:02.388                                                                                                                Latency(us)
00:07:02.388 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:07:02.388 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1829.07       0.89   36883.38    1252.61 1048425.51
00:07:02.388 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   15123.56       7.38    8431.54    1140.82  400722.54
00:07:02.388 ========================================================
00:07:02.388 Total                                                                  :   16952.62       8.28   11501.29    1140.82 1048425.51
00:07:02.388
00:07:02.388 15:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:02.388 15:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038
00:07:02.388 15:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038
00:07:02.649 true
00:07:02.649 15:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 387699
00:07:02.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (387699) - No such process
00:07:02.649 15:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 387699
00:07:02.649 15:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
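
The trace up to here is one cycle repeated: confirm the background I/O workload (pid 387699) is still alive, hot-remove namespace 1 from cnode1, re-add the Delay0 bdev as a namespace, then resize the NULL1 bdev one step larger. A minimal bash sketch of that cycle, reconstructed only from the sh@44-sh@50 commands logged above; the rpc and perf_pid names and the loop framing are assumptions, not the script verbatim:

    # Reconstruction of ns_hotplug_stress.sh lines 44-50 as logged above.
    # rpc and perf_pid are stand-in names; the four commands and the
    # null_size counter are taken from the trace itself.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    perf_pid=387699      # I/O workload whose exit ends the hot-plug loop
    null_size=1023       # chosen so the first logged resize is 1024
    while kill -0 "$perf_pid"; do                                       # sh@44
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # sh@45
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # sh@46
        (( ++null_size ))                                               # sh@49
        "$rpc" bdev_null_resize NULL1 "$null_size"                      # sh@50: prints "true"
    done
    wait "$perf_pid"     # sh@53: reap the workload, matching the "No such process" exit above

Once kill -0 fails, the loop ends and both namespaces are removed (sh@54, sh@55) before the concurrent phase that follows.
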
00:07:02.649 15:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:02.910 15:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:07:02.910 15:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:07:02.910 15:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:07:02.910 15:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:02.910 15:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:07:03.170 null0
00:07:03.170 15:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:03.170 15:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:03.170 15:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:07:03.170 null1
00:07:03.170 15:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:03.170 15:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:03.170 15:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:07:03.431 null2
00:07:03.431 15:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:03.431 15:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:03.431 15:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:07:03.691 null3
00:07:03.691 15:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:03.691 15:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:03.691 15:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:07:03.691 null4
00:07:03.691 15:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:03.691 15:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:03.691 15:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:07:03.953 null5
00:07:03.953 15:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:04.213 null6 00:07:04.213 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:04.213 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:04.213 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:04.213 null7 00:07:04.474 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:04.474 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:04.474 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:04.474 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:04.474 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
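
Before the workers start, sh@58-sh@60 create one null bdev per thread: null0 through null7, each 100 MB with a 4096-byte block size, per the logged arguments. A sketch of that setup loop under the same naming assumptions as above:

    # Setup inferred from the sh@58-sh@60 markers: eight null bdevs, one per
    # concurrent add/remove worker. rpc is the stand-in from the sketch above.
    nthreads=8                                     # sh@58
    pids=()                                        # sh@58: filled in by the dispatch loop
    for ((i = 0; i < nthreads; i++)); do           # sh@59
        "$rpc" bdev_null_create "null$i" 100 4096  # sh@60: prints the bdev name on success
    done
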
00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
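
Each `add_remove <nsid> <bdev>` job being forked in this stretch runs the same short loop: ten rounds of attaching its null bdev as a namespace of cnode1 and detaching it again. Reconstructed from the sh@14-sh@18 markers; the function framing is inferred, the two RPC calls are verbatim from the trace:

    # Per-worker loop per the sh@14-sh@18 markers.
    add_remove() {
        local nsid=$1 bdev=$2 i                                                         # sh@14
        for ((i = 0; i < 10; i++)); do                                                  # sh@16
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # sh@17
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # sh@18
        done
    }
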
00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
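
The interleaved sh@62-sh@64 entries are the parent shell forking eight of those workers, one per namespace/bdev pair, and collecting their PIDs; the sh@66 wait on all eight (394484 through 394498) appears just below. Continuing the sketches above:

    # Dispatch per the sh@62-sh@66 markers; nthreads, pids, and add_remove
    # come from the previous sketches.
    for ((i = 0; i < nthreads; i++)); do   # sh@62
        add_remove $((i + 1)) "null$i" &   # sh@63: nsid 1..8 against null0..null7
        pids+=($!)                         # sh@64
    done
    wait "${pids[@]}"                      # sh@66: block until every worker finishes
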
00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 394484 394485 394487 394489 394492 394494 394496 394498 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:04.475 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:04.736 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:04.736 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:04.736 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:04.737 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:04.737 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:04.737 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.737 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.737 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:04.737 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.737 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.737 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:04.737 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.737 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.737 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:04.737 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.737 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.737 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:04.737 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.737 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.737 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:04.737 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.737 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.737 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:04.737 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.737 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.737 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:04.737 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.737 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.737 15:16:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:04.997 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.997 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:04.997 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:04.997 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:04.997 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:04.997 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:04.997 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:04.997 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:04.997 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.997 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.997 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:05.258 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.258 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.258 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:05.258 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.258 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.258 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:05.258 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.258 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.258 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.258 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:05.258 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.258 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.258 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:05.258 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.258 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.258 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:05.258 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.258 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.258 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:05.258 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.258 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.258 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:05.258 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:05.258 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.258 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.258 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:05.258 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:05.519 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:05.519 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:05.519 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:05.519 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:05.519 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:05.519 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.519 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.519 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.519 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:05.519 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.519 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.519 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:05.519 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.519 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.519 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:05.519 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.519 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.519 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:05.519 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.519 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.519 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:05.519 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.519 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.519 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:05.780 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:05.780 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.780 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.780 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:05.780 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.780 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.780 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:05.780 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:05.780 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:05.780 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:05.780 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:05.780 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:05.780 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.780 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.780 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:06.042 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.042 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.042 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:06.042 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.042 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:06.042 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.042 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.042 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:06.042 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.042 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.042 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:06.042 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.042 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.042 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:06.042 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.042 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.042 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:06.042 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:06.042 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:06.042 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.042 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.042 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:06.042 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:06.042 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:06.303 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.303 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.303 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:06.303 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:06.303 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:06.303 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.303 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.303 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:06.303 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.303 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.303 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.303 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:06.303 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.303 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.303 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:06.303 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.303 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.303 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:06.303 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:06.303 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.303 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.303 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:06.303 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.303 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.303 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:06.303 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.303 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.303 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:06.565 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:06.565 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:06.565 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:06.565 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:06.565 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.565 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.566 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:06.566 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:06.566 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:06.566 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.566 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.566 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.566 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:06.566 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.566 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.566 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:06.566 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.566 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.566 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:06.828 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.828 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.828 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:06.828 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:06.828 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.828 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.828 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:06.828 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.828 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.828 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:06.828 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:06.828 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.828 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.828 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:06.828 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:06.828 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:06.828 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:06.828 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.828 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.828 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:06.828 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:07.089 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.089 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.089 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:07.089 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:07.089 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.089 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.089 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.089 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:07.089 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.089 15:16:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.089 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:07.089 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.089 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.089 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:07.089 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:07.089 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.089 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.089 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:07.089 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:07.089 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.089 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.089 15:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:07.089 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:07.353 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:07.353 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:07.353 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.353 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.353 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:07.353 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.353 15:16:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.353 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:07.353 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:07.353 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.353 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.354 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:07.354 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:07.354 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.354 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.354 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:07.354 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.354 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.354 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:07.354 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.354 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.354 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:07.354 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.354 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:07.354 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.354 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.354 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:07.354 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:07.354 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.354 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.354 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:07.616 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:07.616 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:07.616 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.616 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:07.616 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.616 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:07.616 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:07.616 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.616 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.616 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:07.616 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.616 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.616 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:07.616 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:07.616 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.616 15:16:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.616 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:07.877 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.877 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.877 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:07.877 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.877 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.877 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:07.877 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:07.877 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.877 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.877 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:07.877 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.877 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:07.877 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.877 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.877 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:07.877 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:07.877 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:07.877 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.877 15:16:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.139 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.139 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.139 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.139 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.139 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:08.139 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:08.139 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:08.139 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.139 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.139 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:08.139 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.139 15:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.139 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.139 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.139 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:08.139 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.139 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.139 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.139 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.399 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.399 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.399 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:08.399 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:08.399 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:07:08.399 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:08.399 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:08.399 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:08.399 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:08.399 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:08.399 rmmod nvme_tcp 00:07:08.399 rmmod nvme_fabrics 00:07:08.399 rmmod nvme_keyring 00:07:08.399 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:08.399 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:08.399 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:08.399 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 387272 ']' 00:07:08.399 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 387272 00:07:08.399 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 387272 ']' 00:07:08.399 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 387272 00:07:08.399 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:08.399 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.399 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 387272 00:07:08.660 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:08.660 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:08.661 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 387272' 00:07:08.661 killing process with pid 387272 00:07:08.661 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 387272 00:07:08.661 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 387272 00:07:08.661 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:08.661 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:08.661 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:08.661 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:08.661 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:08.661 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:07:08.661 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:08.661 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:08.661 15:16:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:08.661 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:08.661 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:08.661 15:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.206 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:11.206 00:07:11.206 real 0m50.388s 00:07:11.206 user 3m17.358s 00:07:11.206 sys 0m16.349s 00:07:11.206 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.206 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:11.206 ************************************ 00:07:11.207 END TEST nvmf_ns_hotplug_stress 00:07:11.207 ************************************ 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:11.207 ************************************ 00:07:11.207 START TEST nvmf_delete_subsystem 00:07:11.207 ************************************ 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:11.207 * Looking for test storage... 
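The interleaved add/remove calls traced above are the core of the hotplug stress: per the @16-@18 lines, each worker loops ten times, attaching namespace n (backed by bdev null(n-1)) to cnode1 and immediately detaching it, with up to eight such workers racing against the same subsystem while connections are live. A minimal sketch of that pattern, using only the rpc.py calls and arguments visible in the trace (the background-worker fan-out is inferred from the interleaving, not quoted from ns_hotplug_stress.sh):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # Attach and detach one namespace ten times; NSID n is backed by bdev null(n-1).
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
            "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"
        done
    }

    # One background worker per namespace, all hammering the same subsystem.
    for n in {1..8}; do
        add_remove "$n" "null$((n - 1))" &
    done
    wait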
00:07:11.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:11.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.207 --rc genhtml_branch_coverage=1 00:07:11.207 --rc genhtml_function_coverage=1 00:07:11.207 --rc genhtml_legend=1 00:07:11.207 --rc geninfo_all_blocks=1 00:07:11.207 --rc geninfo_unexecuted_blocks=1 00:07:11.207 00:07:11.207 ' 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:11.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.207 --rc genhtml_branch_coverage=1 00:07:11.207 --rc genhtml_function_coverage=1 00:07:11.207 --rc genhtml_legend=1 00:07:11.207 --rc geninfo_all_blocks=1 00:07:11.207 --rc geninfo_unexecuted_blocks=1 00:07:11.207 00:07:11.207 ' 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:11.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.207 --rc genhtml_branch_coverage=1 00:07:11.207 --rc genhtml_function_coverage=1 00:07:11.207 --rc genhtml_legend=1 00:07:11.207 --rc geninfo_all_blocks=1 00:07:11.207 --rc geninfo_unexecuted_blocks=1 00:07:11.207 00:07:11.207 ' 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:11.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.207 --rc genhtml_branch_coverage=1 00:07:11.207 --rc genhtml_function_coverage=1 00:07:11.207 --rc genhtml_legend=1 00:07:11.207 --rc geninfo_all_blocks=1 00:07:11.207 --rc geninfo_unexecuted_blocks=1 00:07:11.207 00:07:11.207 ' 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.207 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:11.208 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:11.208 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:11.208 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:11.208 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:11.208 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:11.208 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:11.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:11.208 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:11.208 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:11.208 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:11.208 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:11.208 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:11.208 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:11.208 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:11.208 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:11.208 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:11.208 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.208 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:11.208 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.208 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:11.208 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:11.208 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:11.208 15:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:19.348 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:19.348 
15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:19.348 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:19.348 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:19.348 Found net devices under 0000:4b:00.1: cvl_0_1 
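Both E810 ports are located by walking sysfs: for every matching PCI function, the kernel net device name is read out of /sys/bus/pci/devices/<pci>/net/, which is what produces the two "Found net devices under ..." lines above. A condensed sketch of that lookup with the PCI addresses taken from this run (the real gather_supported_nvmf_pci_devs also filters by vendor/device ID and transport type, omitted here):

    # E810 functions (0x8086:0x159b) reported by this host, per the log above.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        # Each bound PCI function exposes its netdev name(s) under sysfs.
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            echo "Found net devices under $pci: ${dev##*/}"
        done
    done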
00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:19.348 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:19.349 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:19.349 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:19.349 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:19.349 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:19.349 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:19.349 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:19.349 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:19.349 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:19.349 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:19.349 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:19.349 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:19.349 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:19.349 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:19.349 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:19.349 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:19.349 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:19.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:19.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:07:19.349 00:07:19.349 --- 10.0.0.2 ping statistics --- 00:07:19.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.349 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:07:19.349 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:19.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:19.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:07:19.349 00:07:19.349 --- 10.0.0.1 ping statistics --- 00:07:19.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.349 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:07:19.349 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:19.349 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:19.349 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:19.349 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:19.349 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:19.349 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:19.349 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:19.349 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:19.349 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:19.349 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:19.349 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:19.349 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:19.349 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.349 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=399749 00:07:19.349 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 399749 00:07:19.349 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:19.349 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 399749 ']' 00:07:19.349 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.349 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.349 15:17:07 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.349 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.349 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.349 [2024-11-20 15:17:07.438444] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:07:19.349 [2024-11-20 15:17:07.438512] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:19.349 [2024-11-20 15:17:07.540184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:19.349 [2024-11-20 15:17:07.591771] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:19.349 [2024-11-20 15:17:07.591822] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:19.349 [2024-11-20 15:17:07.591831] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:19.349 [2024-11-20 15:17:07.591838] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:19.349 [2024-11-20 15:17:07.591845] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:19.349 [2024-11-20 15:17:07.593497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.349 [2024-11-20 15:17:07.593501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.349 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.349 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:19.349 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:19.349 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:19.349 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.349 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:19.349 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:19.611 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.611 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.611 [2024-11-20 15:17:08.312523] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:19.611 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.611 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:19.611 15:17:08 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.611 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.611 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.611 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:19.611 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.611 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.611 [2024-11-20 15:17:08.336846] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:19.611 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.611 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:19.611 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.611 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.611 NULL1 00:07:19.611 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.611 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:19.611 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.611 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.611 Delay0 00:07:19.611 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.611 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.611 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.611 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.611 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.611 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=400020 00:07:19.611 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:19.611 15:17:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:19.611 [2024-11-20 15:17:08.463842] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
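The rpc_cmd provisioning traced above (target/delete_subsystem.sh lines 15-24) can be replayed by hand with SPDK's scripts/rpc.py. A minimal sketch, assuming the default RPC socket /var/tmp/spdk.sock and an SPDK checkout at $SPDK_DIR (both assumptions, not taken from this log):

    # same target state the test builds: TCP transport, subsystem cnode1,
    # listener on 10.0.0.2:4420, a 1000 MB null bdev, and a delay bdev on top
    rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # latencies in us: 1 s per I/O
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The one-second Delay0 latencies are what make the test interesting: they keep the perf run's I/O in flight so that the nvmf_delete_subsystem call below must tear the subsystem down under load, producing the long run of 'completed with error (sct=0, sc=8)' completions that follows.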
00:07:21.526 15:17:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:21.526 15:17:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.526 15:17:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Write completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 starting I/O failed: -6 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 starting I/O failed: -6 00:07:21.787 Write completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Write completed with error (sct=0, sc=8) 00:07:21.787 starting I/O failed: -6 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Write completed with error (sct=0, sc=8) 00:07:21.787 Write completed with error (sct=0, sc=8) 00:07:21.787 Write completed with error (sct=0, sc=8) 00:07:21.787 starting I/O failed: -6 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Write completed with error (sct=0, sc=8) 00:07:21.787 starting I/O failed: -6 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Write completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 starting I/O failed: -6 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 starting I/O failed: -6 00:07:21.787 Write completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 starting I/O failed: -6 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 starting I/O failed: -6 00:07:21.787 Write completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Write completed with error (sct=0, sc=8) 00:07:21.787 starting I/O failed: -6 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Write completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 starting I/O failed: -6 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 [2024-11-20 15:17:10.591914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83f2c0 is same with the state(6) to be set 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Write completed with error (sct=0, sc=8) 00:07:21.787 Write completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Read 
completed with error (sct=0, sc=8) 00:07:21.787 Write completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Write completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Write completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Write completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Write completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Write completed with error (sct=0, sc=8) 00:07:21.787 Write completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Write completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Write completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Write completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Write completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Write completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Write completed with error (sct=0, sc=8) 00:07:21.787 Write completed with error (sct=0, sc=8) 00:07:21.787 Write completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Write completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Read completed with error (sct=0, sc=8) 00:07:21.787 Write completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Write completed with error (sct=0, sc=8) 00:07:21.788 Write completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 starting I/O failed: -6 00:07:21.788 Write completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Write completed with error (sct=0, sc=8) 00:07:21.788 starting I/O failed: -6 00:07:21.788 Write completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 starting I/O failed: -6 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Write completed with error (sct=0, 
sc=8) 00:07:21.788 starting I/O failed: -6 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 starting I/O failed: -6 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Write completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 starting I/O failed: -6 00:07:21.788 Write completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Write completed with error (sct=0, sc=8) 00:07:21.788 starting I/O failed: -6 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Write completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 starting I/O failed: -6 00:07:21.788 Write completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Write completed with error (sct=0, sc=8) 00:07:21.788 starting I/O failed: -6 00:07:21.788 Write completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 starting I/O failed: -6 00:07:21.788 Write completed with error (sct=0, sc=8) 00:07:21.788 [2024-11-20 15:17:10.594681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9ebc000c40 is same with the state(6) to be set 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Write completed with error (sct=0, sc=8) 00:07:21.788 Write completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Write completed with error (sct=0, sc=8) 00:07:21.788 Write completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Write completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Write completed with error (sct=0, sc=8) 00:07:21.788 Write completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Write completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Write completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Write completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Write 
completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Write completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Write completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Read completed with error (sct=0, sc=8) 00:07:21.788 Write completed with error (sct=0, sc=8) 00:07:21.788 Write completed with error (sct=0, sc=8) 00:07:21.788 Write completed with error (sct=0, sc=8) 00:07:21.788 [2024-11-20 15:17:10.595224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9ebc00d490 is same with the state(6) to be set 00:07:22.732 [2024-11-20 15:17:11.563633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8409a0 is same with the state(6) to be set 00:07:22.732 Write completed with error (sct=0, sc=8) 00:07:22.732 Write completed with error (sct=0, sc=8) 00:07:22.732 Read completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Write completed with error (sct=0, sc=8) 00:07:22.733 Write completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Write completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Write completed with error (sct=0, sc=8) 00:07:22.733 Write completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Write completed with error (sct=0, sc=8) 00:07:22.733 Write completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Write completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 [2024-11-20 15:17:11.595138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83f4a0 is same with the state(6) to be set 00:07:22.733 Write completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Write completed with error (sct=0, sc=8) 00:07:22.733 Write completed with error (sct=0, sc=8) 00:07:22.733 Write completed with error (sct=0, sc=8) 00:07:22.733 Write completed with error (sct=0, sc=8) 00:07:22.733 Write completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Write completed with error (sct=0, sc=8) 00:07:22.733 Write completed with error (sct=0, sc=8) 00:07:22.733 Write completed with error (sct=0, sc=8) 00:07:22.733 Write completed with error (sct=0, sc=8) 00:07:22.733 Write completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Write completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Write completed with error (sct=0, sc=8) 00:07:22.733 
Write completed with error (sct=0, sc=8) 00:07:22.733 Write completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 [2024-11-20 15:17:11.595859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83f860 is same with the state(6) to be set 00:07:22.733 Write completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Write completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Write completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Write completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Write completed with error (sct=0, sc=8) 00:07:22.733 Write completed with error (sct=0, sc=8) 00:07:22.733 Write completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Write completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Write completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Write completed with error (sct=0, sc=8) 00:07:22.733 [2024-11-20 15:17:11.596919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9ebc00d020 is same with the state(6) to be set 00:07:22.733 Write completed with error (sct=0, sc=8) 00:07:22.733 Write completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Write completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Write completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 Read completed with error (sct=0, sc=8) 00:07:22.733 [2024-11-20 15:17:11.597693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9ebc00d7c0 is same with the state(6) to be set 00:07:22.733 Initializing NVMe Controllers 00:07:22.733 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:22.733 Controller IO 
queue size 128, less than required. 00:07:22.733 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:22.733 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:22.733 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:22.733 Initialization complete. Launching workers.
00:07:22.733 ========================================================
00:07:22.733 Latency(us)
00:07:22.733 Device Information : IOPS MiB/s Average min max
00:07:22.733 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 166.59 0.08 902785.72 366.05 1008558.64
00:07:22.733 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.14 0.08 973490.82 576.72 2001766.96
00:07:22.733 ========================================================
00:07:22.733 Total : 323.73 0.16 937106.48 366.05 2001766.96
00:07:22.733
00:07:22.733 [2024-11-20 15:17:11.598417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8409a0 (9): Bad file descriptor 00:07:22.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:22.733 15:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.733 15:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:22.733 15:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 400020 00:07:22.733 15:17:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:23.305 15:17:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:23.305 15:17:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 400020 00:07:23.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (400020) - No such process 00:07:23.305 15:17:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 400020 00:07:23.305 15:17:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:07:23.305 15:17:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 400020 00:07:23.305 15:17:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:07:23.305 15:17:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.305 15:17:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:23.305 15:17:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.305 15:17:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 400020 00:07:23.305 15:17:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:23.305 15:17:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:23.305 15:17:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:23.305 15:17:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:23.305 15:17:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:23.305 15:17:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.305 15:17:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:23.305 15:17:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.305 15:17:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:23.305 15:17:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.305 15:17:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:23.305 [2024-11-20 15:17:12.130284] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:23.305 15:17:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.305 15:17:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.305 15:17:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.305 15:17:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:23.305 15:17:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.305 15:17:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=400699 00:07:23.305 15:17:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:23.305 15:17:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:23.305 15:17:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 400699 00:07:23.305 15:17:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:23.305 [2024-11-20 15:17:12.234394] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
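The delay/kill -0/sleep trace that follows is the script polling for the perf process to exit: kill -0 delivers no signal and only tests that the PID still exists. Reconstructed from the line numbers visible in the trace (target/delete_subsystem.sh@56-60), the loop is roughly the sketch below; the exact script body may differ:

    delay=0
    while kill -0 "$perf_pid" 2> /dev/null; do   # probe only: signal 0 checks PID existence
        sleep 0.5
        (( delay++ > 20 )) && exit 1             # give up after ~10 s (20 polls x 0.5 s)
    done

Once the PID disappears, kill -0 fails with 'kill: (400699) - No such process', so that message in the log marks normal completion of the 3-second perf run rather than an error.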
00:07:23.876 15:17:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:23.876 15:17:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 400699 00:07:23.876 15:17:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:24.447 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:24.447 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 400699 00:07:24.447 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:24.708 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:24.708 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 400699 00:07:24.708 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:25.279 15:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:25.279 15:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 400699 00:07:25.279 15:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:25.851 15:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:25.851 15:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 400699 00:07:25.851 15:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:26.422 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:26.422 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 400699 00:07:26.422 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:26.422 Initializing NVMe Controllers 00:07:26.422 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:26.422 Controller IO queue size 128, less than required. 00:07:26.422 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:26.422 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:26.422 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:26.422 Initialization complete. Launching workers. 
00:07:26.422 ========================================================
00:07:26.422 Latency(us)
00:07:26.422 Device Information : IOPS MiB/s Average min max
00:07:26.422 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001851.44 1000171.42 1006109.27
00:07:26.422 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002698.55 1000165.05 1007828.05
00:07:26.422 ========================================================
00:07:26.422 Total : 256.00 0.12 1002274.99 1000165.05 1007828.05
00:07:26.422
00:07:26.994 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:26.994 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 400699 00:07:26.995 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (400699) - No such process 00:07:26.995 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 400699 00:07:26.995 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:26.995 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:26.995 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:26.995 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:26.995 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:26.995 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:26.995 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:26.995 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:26.995 rmmod nvme_tcp 00:07:26.995 rmmod nvme_fabrics 00:07:26.995 rmmod nvme_keyring 00:07:26.995 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:26.995 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:26.995 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:26.995 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 399749 ']' 00:07:26.995 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 399749 00:07:26.995 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 399749 ']' 00:07:26.995 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 399749 00:07:26.995 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:26.995 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:26.995 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 399749 00:07:26.995 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:26.995 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo
']' 00:07:26.995 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 399749' 00:07:26.995 killing process with pid 399749 00:07:26.995 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 399749 00:07:26.995 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 399749 00:07:26.995 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:26.995 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:26.995 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:26.995 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:26.995 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:26.995 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:26.995 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:26.995 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:26.995 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:26.995 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.995 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:26.995 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.540 15:17:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:29.540 00:07:29.540 real 0m18.336s 00:07:29.540 user 0m30.799s 00:07:29.540 sys 0m6.742s 00:07:29.540 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.540 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:29.540 ************************************ 00:07:29.540 END TEST nvmf_delete_subsystem 00:07:29.540 ************************************ 00:07:29.540 15:17:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:29.540 15:17:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:29.540 15:17:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.540 15:17:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:29.540 ************************************ 00:07:29.540 START TEST nvmf_host_management 00:07:29.540 ************************************ 00:07:29.540 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:29.540 * Looking for test storage... 
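One detail of the teardown above deserves a note: firewall cleanup is tag-based. The ipts wrapper that opened port 4420 earlier (nvmf/common.sh@790) appends an SPDK_NVMF comment to every rule it inserts, so iptr (nvmf/common.sh@791) can remove all harness rules in a single save/filter/restore pass instead of deleting them one by one. A sketch of the pair as reconstructed from the trace; the real helpers may differ in detail:

    # insert a rule, tagged with its own arguments as an SPDK_NVMF comment
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    # reload the ruleset minus every tagged rule
    iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }

This is why the teardown never has to remember which rules it added: anything tagged SPDK_NVMF simply drops out when the ruleset is round-tripped.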
00:07:29.540 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:29.540 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:29.540 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:07:29.540 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:29.540 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:29.540 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.540 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.540 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.540 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.540 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.540 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.540 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.540 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.540 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.540 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.540 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.540 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:29.540 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:29.540 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.540 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:29.540 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:29.540 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:29.540 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.540 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:29.540 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.540 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:29.540 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:29.540 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.540 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:29.540 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.540 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.540 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:29.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.541 --rc genhtml_branch_coverage=1 00:07:29.541 --rc genhtml_function_coverage=1 00:07:29.541 --rc genhtml_legend=1 00:07:29.541 --rc geninfo_all_blocks=1 00:07:29.541 --rc geninfo_unexecuted_blocks=1 00:07:29.541 00:07:29.541 ' 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:29.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.541 --rc genhtml_branch_coverage=1 00:07:29.541 --rc genhtml_function_coverage=1 00:07:29.541 --rc genhtml_legend=1 00:07:29.541 --rc geninfo_all_blocks=1 00:07:29.541 --rc geninfo_unexecuted_blocks=1 00:07:29.541 00:07:29.541 ' 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:29.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.541 --rc genhtml_branch_coverage=1 00:07:29.541 --rc genhtml_function_coverage=1 00:07:29.541 --rc genhtml_legend=1 00:07:29.541 --rc geninfo_all_blocks=1 00:07:29.541 --rc geninfo_unexecuted_blocks=1 00:07:29.541 00:07:29.541 ' 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:29.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.541 --rc genhtml_branch_coverage=1 00:07:29.541 --rc genhtml_function_coverage=1 00:07:29.541 --rc genhtml_legend=1 00:07:29.541 --rc geninfo_all_blocks=1 00:07:29.541 --rc geninfo_unexecuted_blocks=1 00:07:29.541 00:07:29.541 ' 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:07:29.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:29.541 15:17:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:37.686 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:37.686 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:37.686 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:37.686 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:37.686 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:37.686 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:37.686 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:37.686 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:37.686 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:37.686 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:37.686 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:37.686 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:37.687 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:37.687 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:37.687 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.687 15:17:25 
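
Both E810 functions pass the driver checks above (bound to ice, neither unknown nor unbound), and each PCI address is then mapped to its kernel net interface by globbing the device's net/ directory; the '[[ up == up ]]' test suggests only interfaces reporting an up operstate are kept. The mapping, reduced to a standalone sketch (paths and names mirror the trace):

pci=0000:4b:00.0                                  # address taken from the log
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the path, keep the ifname
echo "Found net devices under $pci: ${pci_net_devs[*]}"
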
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:37.687 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
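
nvmf_tcp_init above splits the two physical ports across network namespaces: the target port cvl_0_0 moves into cvl_0_0_ns_spdk with 10.0.0.2, while the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1 (on NET_TYPE=phy the two E810 ports are presumably cabled back-to-back). The same topology, reduced to its bare commands as traced:

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                           # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side (root ns)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
ping -c 1 10.0.0.2                                        # initiator -> target, verified below
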
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:37.687 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:37.687 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:37.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:07:37.688 00:07:37.688 --- 10.0.0.2 ping statistics --- 00:07:37.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.688 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:07:37.688 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:37.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:37.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:07:37.688 00:07:37.688 --- 10.0.0.1 ping statistics --- 00:07:37.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.688 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:07:37.688 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:37.688 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:37.688 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:37.688 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:37.688 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:37.688 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:37.688 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:37.688 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:37.688 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:37.688 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:37.688 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:37.688 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:37.688 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:37.688 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:37.688 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:37.688 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=405720 00:07:37.688 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 405720 00:07:37.688 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:37.688 15:17:25 
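
The ipts wrapper above tags the firewall rule it inserts with an 'SPDK_NVMF:<original args>' comment, so teardown can later find and delete exactly the rules this run added (the delete half is inferred, not shown in this trace). The pattern:

# insert, recording the rule spec in the comment:
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# delete by exact rule spec on teardown (inferred counterpart):
iptables -D INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
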
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 405720 ']' 00:07:37.688 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.688 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.688 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.688 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.688 15:17:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:37.688 [2024-11-20 15:17:25.872156] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:07:37.688 [2024-11-20 15:17:25.872232] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.688 [2024-11-20 15:17:25.974591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:37.688 [2024-11-20 15:17:26.027700] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:37.688 [2024-11-20 15:17:26.027757] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:37.688 [2024-11-20 15:17:26.027765] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:37.688 [2024-11-20 15:17:26.027772] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:37.688 [2024-11-20 15:17:26.027778] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
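
nvmfappstart has launched nvmf_tgt (pid 405720) inside the target namespace, and waitforlisten now blocks until the app answers on /var/tmp/spdk.sock. A minimal sketch of that wait loop (the real helper in autotest_common.sh carries more bookkeeping; the retry count mirrors max_retries=100 above, and the rpc.py path and poll interval are assumptions):

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 100; i != 0; i--)); do
        kill -0 "$pid" 2> /dev/null || return 1   # app died while we were waiting
        if scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
            return 0                              # socket is up and answering
        fi
        sleep 0.1                                 # poll interval is an assumption
    done
    return 1
}
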
00:07:37.688 [2024-11-20 15:17:26.029792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.688 [2024-11-20 15:17:26.029952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:37.688 [2024-11-20 15:17:26.030113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.688 [2024-11-20 15:17:26.030113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:37.950 [2024-11-20 15:17:26.751103] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:37.950 Malloc0 00:07:37.950 [2024-11-20 15:17:26.832941] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
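
create_subsystems (the rm/cat/rpc_cmd trio above) writes an RPC batch into rpcs.txt and plays it through rpc_cmd; the 'Malloc0' echo and the listener notice at 10.0.0.2:4420 are its visible effects. The exact batch is not in the trace, but given MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 set earlier and the NQNs used later, it is roughly equivalent to (flags other than sizes and NQNs are guesses):

RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0   # serial is a guess
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
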
target/host_management.sh@73 -- # perfpid=406065 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 406065 /var/tmp/bdevperf.sock 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 406065 ']' 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:37.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:37.950 { 00:07:37.950 "params": { 00:07:37.950 "name": "Nvme$subsystem", 00:07:37.950 "trtype": "$TEST_TRANSPORT", 00:07:37.950 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:37.950 "adrfam": "ipv4", 00:07:37.950 "trsvcid": "$NVMF_PORT", 00:07:37.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:37.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:37.950 "hdgst": ${hdgst:-false}, 00:07:37.950 "ddgst": ${ddgst:-false} 00:07:37.950 }, 00:07:37.950 "method": "bdev_nvme_attach_controller" 00:07:37.950 } 00:07:37.950 EOF 00:07:37.950 )") 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:37.950 15:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:37.950 "params": { 00:07:37.950 "name": "Nvme0", 00:07:37.950 "trtype": "tcp", 00:07:37.950 "traddr": "10.0.0.2", 00:07:37.950 "adrfam": "ipv4", 00:07:37.950 "trsvcid": "4420", 00:07:37.950 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:37.950 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:37.950 "hdgst": false, 00:07:37.950 "ddgst": false 00:07:37.950 }, 00:07:37.950 "method": "bdev_nvme_attach_controller" 00:07:37.950 }' 00:07:38.211 [2024-11-20 15:17:26.943353] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
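
gen_nvmf_target_json emits one bdev_nvme_attach_controller fragment per subsystem (both the heredoc and the jq-joined result are visible above), and bdevperf consumes it through process substitution as --json /dev/fd/63. A standalone equivalent, with the fragment wrapped into a complete bdev-subsystem config file (the wrapper shape is a sketch; the attach parameters are exactly those printed in the trace):

cat > /tmp/bdevperf_nvme.json << 'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json /tmp/bdevperf_nvme.json -q 64 -o 65536 -w verify -t 10
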
00:07:38.211 [2024-11-20 15:17:26.943421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid406065 ] 00:07:38.211 [2024-11-20 15:17:27.038278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.211 [2024-11-20 15:17:27.091578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.783 Running I/O for 10 seconds... 00:07:39.047 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.047 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:39.047 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:39.047 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.047 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.047 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.047 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:39.047 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:39.047 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:39.047 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:39.047 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:39.047 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:39.047 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:39.047 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:39.047 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:39.047 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.047 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.047 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:39.047 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.048 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:07:39.048 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:07:39.048 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:39.048 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:39.048 15:17:27 
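
The waitforio trace above polls bdevperf's own RPC socket until the Nvme0n1 job has completed at least 100 reads (read_io_count=515 here), guaranteeing that the fault injected next lands while I/O is genuinely in flight. Reconstructed as a sketch under the same names the script uses:

waitforio() {
    local rpc_addr=$1 bdev=$2 ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(scripts/rpc.py -s "$rpc_addr" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25    # retry pacing is an assumption, not visible in the trace
    done
    return $ret
}
waitforio /var/tmp/bdevperf.sock Nvme0n1
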
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:39.048 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:39.048 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.048 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.048 [2024-11-20 15:17:27.852856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb14150 is same with the state(6) to be set 00:07:39.048 [2024-11-20 15:17:27.852970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb14150 is same with the state(6) to be set 00:07:39.048 [2024-11-20 15:17:27.852980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb14150 is same with the state(6) to be set 00:07:39.048 [2024-11-20 15:17:27.852995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb14150 is same with the state(6) to be set 00:07:39.048 [2024-11-20 15:17:27.853002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb14150 is same with the state(6) to be set 00:07:39.048 [2024-11-20 15:17:27.853009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb14150 is same with the state(6) to be set 00:07:39.048 [2024-11-20 15:17:27.853016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb14150 is same with the state(6) to be set 00:07:39.048 [2024-11-20 15:17:27.853023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb14150 is same with the state(6) to be set 00:07:39.048 [2024-11-20 15:17:27.853029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb14150 is same with the state(6) to be set 00:07:39.048 [2024-11-20 15:17:27.853036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb14150 is same with the state(6) to be set 00:07:39.048 [2024-11-20 15:17:27.853043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb14150 is same with the state(6) to be set 00:07:39.048 [2024-11-20 15:17:27.853049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb14150 is same with the state(6) to be set 00:07:39.048 [2024-11-20 15:17:27.853056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb14150 is same with the state(6) to be set 00:07:39.048 [2024-11-20 15:17:27.853062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb14150 is same with the state(6) to be set 00:07:39.048 [2024-11-20 15:17:27.853069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb14150 is same with the state(6) to be set 00:07:39.048 [2024-11-20 15:17:27.853076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb14150 is same with the state(6) to be set 00:07:39.048 [2024-11-20 15:17:27.853083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb14150 is same with the state(6) to be set 00:07:39.048 [2024-11-20 15:17:27.853089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb14150 is same with the state(6) to be set 00:07:39.048 [2024-11-20 15:17:27.853096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xb14150 is same with the state(6) to be set [the tcp.c:1773 nvmf_tcp_qpair_set_recv_state *ERROR* record above repeats verbatim, differing only in the microsecond timestamp, from 15:17:27.853103 through 15:17:27.853414 while the target tears the qpair down]
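
This is the failover under test: with verify I/O running, host_management.sh de-authorizes the host on the subsystem and immediately re-adds it (script lines 84/85, traced above and below). The target drops the TCP qpair, which produces the recv-state spam above and, on the host side, an 'ABORTED - SQ DELETION' completion for every queued admin and I/O command, after which bdev_nvme resets the controller. The equivalent standalone RPC pair:

scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
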
00:07:39.048 [2024-11-20 15:17:27.853420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb14150 is same with the state(6) to be set 00:07:39.048 [2024-11-20 15:17:27.856332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:39.048 [2024-11-20 15:17:27.856394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.048 [2024-11-20 15:17:27.856406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:39.048 [2024-11-20 15:17:27.856416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.048 [2024-11-20 15:17:27.856426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:39.048 [2024-11-20 15:17:27.856435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.048 [2024-11-20 15:17:27.856443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:39.048 [2024-11-20 15:17:27.856451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.048 [2024-11-20 15:17:27.856459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1842000 is same with the state(6) to be set 00:07:39.048 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.048 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:39.048 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.048 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.049 [2024-11-20 15:17:27.863621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.049 [2024-11-20 15:17:27.863658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.049 [2024-11-20 15:17:27.863677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.049 [2024-11-20 15:17:27.863685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.049 [2024-11-20 15:17:27.863696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.049 [2024-11-20 15:17:27.863704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.049 [2024-11-20 15:17:27.863714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.049 [2024-11-20 15:17:27.863722] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [the nvme_io_qpair_print_command / ABORTED - SQ DELETION pair above repeats for every command still queued on the I/O qpair: READ cid:37 through cid:63 (lba 78464 through 81792) and WRITE cid:0 through cid:31 (lba 81920 through 85888), each len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, timestamps 15:17:27.863731 through 15:17:27.864813] 00:07:39.050 [2024-11-20 15:17:27.864822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.050 [2024-11-20 15:17:27.864830] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.050 [2024-11-20 15:17:27.866140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:39.050 task offset: 77952 on job bdev=Nvme0n1 fails 00:07:39.050 00:07:39.050 Latency(us) 00:07:39.050 [2024-11-20T14:17:28.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:39.050 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:39.050 Job: Nvme0n1 ended in about 0.40 seconds with error 00:07:39.050 Verification LBA range: start 0x0 length 0x400 00:07:39.050 Nvme0n1 : 0.40 1509.57 94.35 158.64 0.00 37132.86 1720.32 34734.08 00:07:39.050 [2024-11-20T14:17:28.010Z] =================================================================================================================== 00:07:39.050 [2024-11-20T14:17:28.010Z] Total : 1509.57 94.35 158.64 0.00 37132.86 1720.32 34734.08 00:07:39.050 [2024-11-20 15:17:27.868362] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:39.050 [2024-11-20 15:17:27.868403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1842000 (9): Bad file descriptor 00:07:39.050 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.050 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:39.050 [2024-11-20 15:17:27.889705] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:07:39.992 15:17:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 406065 00:07:39.992 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (406065) - No such process 00:07:39.992 15:17:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:39.992 15:17:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:39.992 15:17:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:39.992 15:17:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:39.992 15:17:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:39.992 15:17:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:39.992 15:17:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:39.992 15:17:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:39.992 { 00:07:39.992 "params": { 00:07:39.992 "name": "Nvme$subsystem", 00:07:39.992 "trtype": "$TEST_TRANSPORT", 00:07:39.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:39.992 "adrfam": "ipv4", 00:07:39.992 "trsvcid": "$NVMF_PORT", 00:07:39.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:39.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:39.992 "hdgst": ${hdgst:-false}, 00:07:39.992 "ddgst": ${ddgst:-false} 
00:07:39.992 }, 00:07:39.992 "method": "bdev_nvme_attach_controller" 00:07:39.992 } 00:07:39.992 EOF 00:07:39.992 )") 00:07:39.992 15:17:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:39.992 15:17:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:39.992 15:17:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:39.992 15:17:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:39.992 "params": { 00:07:39.992 "name": "Nvme0", 00:07:39.992 "trtype": "tcp", 00:07:39.992 "traddr": "10.0.0.2", 00:07:39.992 "adrfam": "ipv4", 00:07:39.992 "trsvcid": "4420", 00:07:39.992 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:39.992 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:39.992 "hdgst": false, 00:07:39.992 "ddgst": false 00:07:39.992 }, 00:07:39.992 "method": "bdev_nvme_attach_controller" 00:07:39.992 }' 00:07:39.992 [2024-11-20 15:17:28.931419] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:07:39.992 [2024-11-20 15:17:28.931475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid406447 ] 00:07:40.252 [2024-11-20 15:17:29.020111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.252 [2024-11-20 15:17:29.054654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.512 Running I/O for 1 seconds... 00:07:41.451 1993.00 IOPS, 124.56 MiB/s 00:07:41.451 Latency(us) 00:07:41.451 [2024-11-20T14:17:30.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:41.451 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:41.451 Verification LBA range: start 0x0 length 0x400 00:07:41.451 Nvme0n1 : 1.02 2023.57 126.47 0.00 0.00 30954.13 2143.57 32986.45 00:07:41.451 [2024-11-20T14:17:30.411Z] =================================================================================================================== 00:07:41.451 [2024-11-20T14:17:30.411Z] Total : 2023.57 126.47 0.00 0.00 30954.13 2143.57 32986.45 00:07:41.711 15:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:41.711 15:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:41.711 15:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:41.711 15:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:41.711 15:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:41.711 15:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:41.711 15:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:41.711 15:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:41.711 15:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:41.711 15:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:07:41.711 15:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:41.711 rmmod nvme_tcp 00:07:41.711 rmmod nvme_fabrics 00:07:41.711 rmmod nvme_keyring 00:07:41.711 15:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:41.711 15:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:41.711 15:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:41.711 15:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 405720 ']' 00:07:41.711 15:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 405720 00:07:41.711 15:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 405720 ']' 00:07:41.711 15:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 405720 00:07:41.711 15:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:41.711 15:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.711 15:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 405720 00:07:41.711 15:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:41.711 15:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:41.711 15:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 405720' 00:07:41.711 killing process with pid 405720 00:07:41.711 15:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 405720 00:07:41.711 15:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 405720 00:07:41.971 [2024-11-20 15:17:30.729067] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:41.971 15:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:41.971 15:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:41.971 15:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:41.971 15:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:41.971 15:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:41.971 15:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:41.971 15:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:41.971 15:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:41.971 15:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:41.971 15:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.971 15:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:07:41.971 15:17:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.886 15:17:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:43.886 15:17:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:43.886 00:07:43.886 real 0m14.757s 00:07:43.886 user 0m23.789s 00:07:43.886 sys 0m6.703s 00:07:43.886 15:17:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.886 15:17:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:43.886 ************************************ 00:07:43.886 END TEST nvmf_host_management 00:07:43.886 ************************************ 00:07:44.147 15:17:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:44.147 15:17:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:44.147 15:17:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.147 15:17:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:44.147 ************************************ 00:07:44.147 START TEST nvmf_lvol 00:07:44.147 ************************************ 00:07:44.147 15:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:44.147 * Looking for test storage... 00:07:44.147 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:44.147 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:44.147 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:07:44.147 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:44.409 15:17:33 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:44.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.409 --rc genhtml_branch_coverage=1 00:07:44.409 --rc genhtml_function_coverage=1 00:07:44.409 --rc genhtml_legend=1 00:07:44.409 --rc geninfo_all_blocks=1 00:07:44.409 --rc geninfo_unexecuted_blocks=1 00:07:44.409 00:07:44.409 ' 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:44.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.409 --rc genhtml_branch_coverage=1 00:07:44.409 --rc genhtml_function_coverage=1 00:07:44.409 --rc genhtml_legend=1 00:07:44.409 --rc geninfo_all_blocks=1 00:07:44.409 --rc geninfo_unexecuted_blocks=1 00:07:44.409 00:07:44.409 ' 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:44.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.409 --rc genhtml_branch_coverage=1 00:07:44.409 --rc genhtml_function_coverage=1 00:07:44.409 --rc genhtml_legend=1 00:07:44.409 --rc geninfo_all_blocks=1 00:07:44.409 --rc geninfo_unexecuted_blocks=1 00:07:44.409 00:07:44.409 ' 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:44.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.409 --rc genhtml_branch_coverage=1 00:07:44.409 --rc genhtml_function_coverage=1 00:07:44.409 --rc genhtml_legend=1 00:07:44.409 --rc geninfo_all_blocks=1 00:07:44.409 --rc geninfo_unexecuted_blocks=1 00:07:44.409 00:07:44.409 ' 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 
-- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.409 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.410 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.410 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:44.410 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.410 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:44.410 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:44.410 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:44.410 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:44.410 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:44.410 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:44.410 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:44.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:44.410 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:44.410 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:44.410 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:44.410 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:44.410 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:44.410 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:44.410 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:44.410 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:44.410 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:44.410 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:44.410 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:44.410 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:44.410 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:44.410 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:44.410 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.410 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:44.410 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.410 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:44.410 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:44.410 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:44.410 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:52.690 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:52.690 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:52.690 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:52.690 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:52.690 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:52.690 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:52.690 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:52.690 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:52.690 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:52.690 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:52.690 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:52.690 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:52.690 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:52.690 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:52.690 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:52.690 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:52.690 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:52.690 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:52.690 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:52.690 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:52.690 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:52.690 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:52.690 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:52.690 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:52.690 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:52.690 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:52.690 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:52.690 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:52.690 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:52.690 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:52.690 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:52.690 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:52.690 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:52.690 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:52.691 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:52.691 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:52.691 15:17:40 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:52.691 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:52.691 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:52.691 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:52.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:07:52.691 00:07:52.691 --- 10.0.0.2 ping statistics --- 00:07:52.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.691 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:52.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:52.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:07:52.691 00:07:52.691 --- 10.0.0.1 ping statistics --- 00:07:52.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.691 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=411068 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 411068 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 411068 ']' 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.691 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:52.691 [2024-11-20 15:17:40.755674] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
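One message worth decoding before the target comes up: the `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected` complaint printed earlier (while common.sh built the nvmf app arguments) is a bash quirk, not a test failure. The trace shows the test `'[' '' -eq 1 ']'`: `-eq` requires integer operands, so `[` reports an error, the condition simply evaluates false (exit status 2), and the guarded branch is skipped. A minimal reproduction and fix, with SOME_FLAG standing in for whatever variable is empty there (its real name is not visible in the trace):

    # fails with "[: : integer expression expected" when SOME_FLAG is unset or empty
    if [ "$SOME_FLAG" -eq 1 ]; then echo enabled; fi
    # defaulting the expansion keeps the test well-formed either way
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then echo enabled; fi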
00:07:52.691 [2024-11-20 15:17:40.755742] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.691 [2024-11-20 15:17:40.857965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:52.691 [2024-11-20 15:17:40.909840] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.691 [2024-11-20 15:17:40.909897] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:52.691 [2024-11-20 15:17:40.909906] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:52.691 [2024-11-20 15:17:40.909914] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:52.691 [2024-11-20 15:17:40.909920] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:52.691 [2024-11-20 15:17:40.911768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.691 [2024-11-20 15:17:40.911926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.691 [2024-11-20 15:17:40.911927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.691 15:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.691 15:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:52.692 15:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:52.692 15:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:52.692 15:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:52.692 15:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:52.692 15:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:52.953 [2024-11-20 15:17:41.791933] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:52.953 15:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:53.215 15:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:53.215 15:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:53.476 15:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:53.476 15:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:53.738 15:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:53.738 15:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a03cdad9-9c52-4765-b40c-eb37dff94af8 00:07:53.739 15:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a03cdad9-9c52-4765-b40c-eb37dff94af8 lvol 20 00:07:54.000 15:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=8afcb70a-67d6-419f-b91c-c08b6e2f2ba7 00:07:54.000 15:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:54.261 15:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8afcb70a-67d6-419f-b91c-c08b6e2f2ba7 00:07:54.522 15:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:54.522 [2024-11-20 15:17:43.455005] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:54.784 15:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:54.784 15:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=411540 00:07:54.784 15:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:54.784 15:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:56.168 15:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 8afcb70a-67d6-419f-b91c-c08b6e2f2ba7 MY_SNAPSHOT 00:07:56.168 15:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1e1fe16e-9042-4e97-a518-534f010cc2c2 00:07:56.168 15:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 8afcb70a-67d6-419f-b91c-c08b6e2f2ba7 30 00:07:56.430 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 1e1fe16e-9042-4e97-a518-534f010cc2c2 MY_CLONE 00:07:56.430 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=620b100b-6f0e-4d69-aafd-625380297e15 00:07:56.430 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 620b100b-6f0e-4d69-aafd-625380297e15 00:07:57.001 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 411540 00:08:05.137 Initializing NVMe Controllers 00:08:05.137 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:05.137 Controller IO queue size 128, less than required. 00:08:05.137 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
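The trace above drives the complete lvol lifecycle through rpc.py while spdk_nvme_perf writes to the exported namespace. Boiled down to a standalone sketch (the rpc.py path is shortened here; each command substitution captures the ID the next command consumes, exactly as the harness does with $lvs, $lvol, $snapshot and $clone, and the concrete UUIDs in the trace are simply what this run happened to get):

    rpc=/path/to/spdk/scripts/rpc.py                     # shortened; the trace uses the full workspace path
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)       # lvstore on the raid0 bdev, prints its UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)      # 20 MiB lvol
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)  # freeze the lvol's current contents
    $rpc bdev_lvol_resize "$lvol" 30                     # grow the live lvol from 20 to 30 MiB
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)       # writable clone of the snapshot
    $rpc bdev_lvol_inflate "$clone"                      # allocate all clusters, detach the clone from the snapshot

All RPC names and arguments are taken verbatim from the trace; only the variable plumbing is a sketch.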
00:08:05.137 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:05.137 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:05.137 Initialization complete. Launching workers. 00:08:05.137 ======================================================== 00:08:05.137 Latency(us) 00:08:05.137 Device Information : IOPS MiB/s Average min max 00:08:05.137 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16343.30 63.84 7834.87 1906.96 53656.88 00:08:05.137 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16234.80 63.42 7885.12 685.08 62636.59 00:08:05.137 ======================================================== 00:08:05.137 Total : 32578.10 127.26 7859.91 685.08 62636.59 00:08:05.137 00:08:05.137 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:05.397 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8afcb70a-67d6-419f-b91c-c08b6e2f2ba7 00:08:05.658 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a03cdad9-9c52-4765-b40c-eb37dff94af8 00:08:05.658 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:05.658 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:05.658 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:05.658 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:05.658 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:05.658 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:05.658 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:05.658 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:05.658 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:05.658 rmmod nvme_tcp 00:08:05.658 rmmod nvme_fabrics 00:08:05.658 rmmod nvme_keyring 00:08:05.658 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:05.658 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:05.658 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:05.658 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 411068 ']' 00:08:05.658 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 411068 00:08:05.658 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 411068 ']' 00:08:05.658 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 411068 00:08:05.658 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:05.658 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:05.659 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 411068 00:08:05.919 15:17:54 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:05.919 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:05.919 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 411068' 00:08:05.919 killing process with pid 411068 00:08:05.919 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 411068 00:08:05.919 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 411068 00:08:05.919 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:05.919 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:05.919 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:05.919 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:05.919 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:05.919 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:05.919 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:05.919 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:05.919 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:05.919 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.919 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:05.919 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.471 15:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:08.471 00:08:08.471 real 0m23.937s 00:08:08.471 user 1m4.493s 00:08:08.471 sys 0m8.780s 00:08:08.471 15:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.471 15:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:08.471 ************************************ 00:08:08.471 END TEST nvmf_lvol 00:08:08.471 ************************************ 00:08:08.471 15:17:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:08.471 15:17:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:08.471 15:17:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.471 15:17:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:08.471 ************************************ 00:08:08.471 START TEST nvmf_lvs_grow 00:08:08.471 ************************************ 00:08:08.471 15:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:08.471 * Looking for test storage... 
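A detail of the teardown above that is easy to miss: the harness never deletes its firewall rule by position. `ipts` tagged the rule at insert time with an iptables comment, and `iptr` removes every tagged rule in one pass by filtering a full dump. The pair, lifted from the trace (assumes root and working iptables-save/iptables-restore on the box):

    # setup: open the NVMe/TCP port and tag the rule so it can be found again
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # teardown: drop every SPDK_NVMF-tagged rule, leave everything else untouched
    iptables-save | grep -v SPDK_NVMF | iptables-restore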
00:08:08.471 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:08.471 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:08.471 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:08:08.471 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:08.471 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:08.471 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:08.471 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:08.471 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:08.471 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:08.471 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:08.471 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:08.471 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:08.471 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:08.471 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:08.471 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:08.471 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:08.471 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:08.471 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:08.471 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:08.471 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:08.471 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:08.471 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:08.471 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:08.471 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:08.471 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:08.471 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:08.471 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:08.471 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:08.471 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:08.471 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:08.471 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:08.471 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:08.471 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:08.471 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:08.471 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:08.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.472 --rc genhtml_branch_coverage=1 00:08:08.472 --rc genhtml_function_coverage=1 00:08:08.472 --rc genhtml_legend=1 00:08:08.472 --rc geninfo_all_blocks=1 00:08:08.472 --rc geninfo_unexecuted_blocks=1 00:08:08.472 00:08:08.472 ' 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:08.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.472 --rc genhtml_branch_coverage=1 00:08:08.472 --rc genhtml_function_coverage=1 00:08:08.472 --rc genhtml_legend=1 00:08:08.472 --rc geninfo_all_blocks=1 00:08:08.472 --rc geninfo_unexecuted_blocks=1 00:08:08.472 00:08:08.472 ' 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:08.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.472 --rc genhtml_branch_coverage=1 00:08:08.472 --rc genhtml_function_coverage=1 00:08:08.472 --rc genhtml_legend=1 00:08:08.472 --rc geninfo_all_blocks=1 00:08:08.472 --rc geninfo_unexecuted_blocks=1 00:08:08.472 00:08:08.472 ' 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:08.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.472 --rc genhtml_branch_coverage=1 00:08:08.472 --rc genhtml_function_coverage=1 00:08:08.472 --rc genhtml_legend=1 00:08:08.472 --rc geninfo_all_blocks=1 00:08:08.472 --rc geninfo_unexecuted_blocks=1 00:08:08.472 00:08:08.472 ' 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:08.472 15:17:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:08.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:08.472 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:16.617 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:16.617 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:16.617 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:16.617 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:16.617 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:16.617 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:16.617 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:16.617 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:16.617 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:16.617 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:16.617 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:16.617 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:16.617 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:16.617 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:16.617 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:16.617 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:16.617 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:16.617 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:16.617 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:16.617 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:16.618 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:16.618 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:16.618 15:18:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:16.618 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:16.618 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}")
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:08:16.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:16.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms
00:08:16.618
00:08:16.618 --- 10.0.0.2 ping statistics ---
00:08:16.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:16.618 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:16.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:16.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms
00:08:16.618
00:08:16.618 --- 10.0.0.1 ping statistics ---
00:08:16.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:16.618 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=418185
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 418185
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 418185 ']'
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:16.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:16.618 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:08:16.619 [2024-11-20 15:18:04.826453] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization...
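[The bring-up traced above is SPDK's usual phy-mode test topology: one port of the e810 pair is moved into a private network namespace and acts as the NVMe/TCP target, while the other stays in the root namespace as the initiator, so traffic really crosses the wire. Condensed to the bare commands this run executed (the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addresses are specific to this host):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                              # verify reachability in both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmf_tgt is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1), which is why every target-side command in the trace is wrapped in NVMF_TARGET_NS_CMD.]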
00:08:16.619 [2024-11-20 15:18:04.826521] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.619 [2024-11-20 15:18:04.926297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.619 [2024-11-20 15:18:04.979229] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:16.619 [2024-11-20 15:18:04.979279] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:16.619 [2024-11-20 15:18:04.979288] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:16.619 [2024-11-20 15:18:04.979296] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:16.619 [2024-11-20 15:18:04.979302] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:16.619 [2024-11-20 15:18:04.979967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.880 15:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:16.880 15:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:16.880 15:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:16.880 15:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:16.880 15:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:16.880 15:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.880 15:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:17.142 [2024-11-20 15:18:05.854813] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:17.142 15:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:17.142 15:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:17.142 15:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.142 15:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:17.142 ************************************ 00:08:17.142 START TEST lvs_grow_clean 00:08:17.142 ************************************ 00:08:17.142 15:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:17.142 15:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:17.142 15:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:17.142 15:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:17.142 15:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:17.142 15:18:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:17.142 15:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:17.142 15:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:17.142 15:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:17.142 15:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:17.404 15:18:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:17.404 15:18:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:17.404 15:18:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=5d7d52ff-c6a7-40e4-969d-cacb7420bc3c 00:08:17.404 15:18:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5d7d52ff-c6a7-40e4-969d-cacb7420bc3c 00:08:17.404 15:18:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:17.665 15:18:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:17.665 15:18:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:17.665 15:18:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5d7d52ff-c6a7-40e4-969d-cacb7420bc3c lvol 150 00:08:17.927 15:18:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=72f7a4aa-1479-4be7-90ca-c536f8a8f338 00:08:17.927 15:18:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:17.927 15:18:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:18.189 [2024-11-20 15:18:06.895720] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:18.189 [2024-11-20 15:18:06.895794] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:18.189 true 00:08:18.189 15:18:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
5d7d52ff-c6a7-40e4-969d-cacb7420bc3c 00:08:18.189 15:18:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:18.189 15:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:18.189 15:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:18.451 15:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 72f7a4aa-1479-4be7-90ca-c536f8a8f338 00:08:18.712 15:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:18.712 [2024-11-20 15:18:07.638074] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:18.712 15:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:18.973 15:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=418727 00:08:18.973 15:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:18.973 15:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:18.973 15:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 418727 /var/tmp/bdevperf.sock 00:08:18.973 15:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 418727 ']' 00:08:18.973 15:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:18.973 15:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:18.973 15:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:18.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:18.974 15:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:18.974 15:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:18.974 [2024-11-20 15:18:07.894634] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
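[Condensed from the trace above, the lvs_grow_clean setup is a short RPC recipe: put an lvstore on a file-backed AIO bdev, export a lvol from it over NVMe/TCP, enlarge the backing file and rescan, then later (mid-I/O) grow the lvstore. This is a sketch only: rpc.py abbreviates the full scripts/rpc.py path, aio_file is a placeholder for the aio_bdev test file, and <lvs-uuid>/<lvol-uuid> stand for the UUIDs printed in this run (5d7d52ff-... and 72f7a4aa-...):

    truncate -s 200M aio_file                       # 200 MiB backing file; with 4 MiB clusters -> 49 data clusters
    rpc.py bdev_aio_create aio_file aio_bdev 4096
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
    rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150
    truncate -s 400M aio_file                       # bdev grows on rescan, lvstore still reports 49 clusters
    rpc.py bdev_aio_rescan aio_bdev
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_lvol_grow_lvstore -u <lvs-uuid>     # issued later while bdevperf writes: 49 -> 99 data clusters
]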
00:08:18.974 [2024-11-20 15:18:07.894706] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid418727 ] 00:08:19.236 [2024-11-20 15:18:07.987443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.236 [2024-11-20 15:18:08.039338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.811 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:19.811 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:19.811 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:20.385 Nvme0n1 00:08:20.385 15:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:20.385 [ 00:08:20.385 { 00:08:20.385 "name": "Nvme0n1", 00:08:20.385 "aliases": [ 00:08:20.385 "72f7a4aa-1479-4be7-90ca-c536f8a8f338" 00:08:20.385 ], 00:08:20.385 "product_name": "NVMe disk", 00:08:20.385 "block_size": 4096, 00:08:20.385 "num_blocks": 38912, 00:08:20.385 "uuid": "72f7a4aa-1479-4be7-90ca-c536f8a8f338", 00:08:20.385 "numa_id": 0, 00:08:20.385 "assigned_rate_limits": { 00:08:20.385 "rw_ios_per_sec": 0, 00:08:20.385 "rw_mbytes_per_sec": 0, 00:08:20.385 "r_mbytes_per_sec": 0, 00:08:20.385 "w_mbytes_per_sec": 0 00:08:20.385 }, 00:08:20.385 "claimed": false, 00:08:20.385 "zoned": false, 00:08:20.385 "supported_io_types": { 00:08:20.385 "read": true, 00:08:20.385 "write": true, 00:08:20.385 "unmap": true, 00:08:20.385 "flush": true, 00:08:20.385 "reset": true, 00:08:20.385 "nvme_admin": true, 00:08:20.385 "nvme_io": true, 00:08:20.385 "nvme_io_md": false, 00:08:20.385 "write_zeroes": true, 00:08:20.385 "zcopy": false, 00:08:20.385 "get_zone_info": false, 00:08:20.385 "zone_management": false, 00:08:20.385 "zone_append": false, 00:08:20.385 "compare": true, 00:08:20.385 "compare_and_write": true, 00:08:20.385 "abort": true, 00:08:20.385 "seek_hole": false, 00:08:20.385 "seek_data": false, 00:08:20.385 "copy": true, 00:08:20.385 "nvme_iov_md": false 00:08:20.385 }, 00:08:20.385 "memory_domains": [ 00:08:20.385 { 00:08:20.385 "dma_device_id": "system", 00:08:20.385 "dma_device_type": 1 00:08:20.385 } 00:08:20.385 ], 00:08:20.385 "driver_specific": { 00:08:20.385 "nvme": [ 00:08:20.385 { 00:08:20.385 "trid": { 00:08:20.385 "trtype": "TCP", 00:08:20.386 "adrfam": "IPv4", 00:08:20.386 "traddr": "10.0.0.2", 00:08:20.386 "trsvcid": "4420", 00:08:20.386 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:20.386 }, 00:08:20.386 "ctrlr_data": { 00:08:20.386 "cntlid": 1, 00:08:20.386 "vendor_id": "0x8086", 00:08:20.386 "model_number": "SPDK bdev Controller", 00:08:20.386 "serial_number": "SPDK0", 00:08:20.386 "firmware_revision": "25.01", 00:08:20.386 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:20.386 "oacs": { 00:08:20.386 "security": 0, 00:08:20.386 "format": 0, 00:08:20.386 "firmware": 0, 00:08:20.386 "ns_manage": 0 00:08:20.386 }, 00:08:20.386 "multi_ctrlr": true, 00:08:20.386 
"ana_reporting": false 00:08:20.386 }, 00:08:20.386 "vs": { 00:08:20.386 "nvme_version": "1.3" 00:08:20.386 }, 00:08:20.386 "ns_data": { 00:08:20.386 "id": 1, 00:08:20.386 "can_share": true 00:08:20.386 } 00:08:20.386 } 00:08:20.386 ], 00:08:20.386 "mp_policy": "active_passive" 00:08:20.386 } 00:08:20.386 } 00:08:20.386 ] 00:08:20.386 15:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=419176 00:08:20.386 15:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:20.386 15:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:20.646 Running I/O for 10 seconds... 00:08:21.588 Latency(us) 00:08:21.588 [2024-11-20T14:18:10.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:21.588 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.588 Nvme0n1 : 1.00 24900.00 97.27 0.00 0.00 0.00 0.00 0.00 00:08:21.588 [2024-11-20T14:18:10.548Z] =================================================================================================================== 00:08:21.588 [2024-11-20T14:18:10.548Z] Total : 24900.00 97.27 0.00 0.00 0.00 0.00 0.00 00:08:21.588 00:08:22.530 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5d7d52ff-c6a7-40e4-969d-cacb7420bc3c 00:08:22.530 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.530 Nvme0n1 : 2.00 25114.50 98.10 0.00 0.00 0.00 0.00 0.00 00:08:22.530 [2024-11-20T14:18:11.490Z] =================================================================================================================== 00:08:22.530 [2024-11-20T14:18:11.490Z] Total : 25114.50 98.10 0.00 0.00 0.00 0.00 0.00 00:08:22.530 00:08:22.530 true 00:08:22.530 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5d7d52ff-c6a7-40e4-969d-cacb7420bc3c 00:08:22.530 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:22.791 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:22.791 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:22.791 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 419176 00:08:23.735 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.735 Nvme0n1 : 3.00 25207.33 98.47 0.00 0.00 0.00 0.00 0.00 00:08:23.735 [2024-11-20T14:18:12.695Z] =================================================================================================================== 00:08:23.735 [2024-11-20T14:18:12.695Z] Total : 25207.33 98.47 0.00 0.00 0.00 0.00 0.00 00:08:23.736 00:08:24.677 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.677 Nvme0n1 : 4.00 25273.50 98.72 0.00 0.00 0.00 0.00 0.00 00:08:24.677 [2024-11-20T14:18:13.637Z] 
===================================================================================================================
00:08:24.677 [2024-11-20T14:18:13.637Z] Total : 25273.50 98.72 0.00 0.00 0.00 0.00 0.00
00:08:24.677
00:08:25.618 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:08:25.618 Nvme0n1 : 5.00 25313.20 98.88 0.00 0.00 0.00 0.00 0.00 [2024-11-20T14:18:14.578Z] ===================================================================================================================
00:08:25.618 [2024-11-20T14:18:14.578Z] Total : 25313.20 98.88 0.00 0.00 0.00 0.00 0.00
00:08:25.618
00:08:26.563 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:08:26.563 Nvme0n1 : 6.00 25350.33 99.02 0.00 0.00 0.00 0.00 0.00 [2024-11-20T14:18:15.523Z] ===================================================================================================================
00:08:26.563 [2024-11-20T14:18:15.523Z] Total : 25350.33 99.02 0.00 0.00 0.00 0.00 0.00
00:08:26.563
00:08:27.505 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:08:27.506 Nvme0n1 : 7.00 25376.71 99.13 0.00 0.00 0.00 0.00 0.00 [2024-11-20T14:18:16.466Z] ===================================================================================================================
00:08:27.506 [2024-11-20T14:18:16.466Z] Total : 25376.71 99.13 0.00 0.00 0.00 0.00 0.00
00:08:27.506
00:08:28.448 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:08:28.448 Nvme0n1 : 8.00 25396.50 99.21 0.00 0.00 0.00 0.00 0.00 [2024-11-20T14:18:17.408Z] ===================================================================================================================
00:08:28.448 [2024-11-20T14:18:17.408Z] Total : 25396.50 99.21 0.00 0.00 0.00 0.00 0.00
00:08:28.448
00:08:29.830 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:08:29.830 Nvme0n1 : 9.00 25412.11 99.27 0.00 0.00 0.00 0.00 0.00 [2024-11-20T14:18:18.790Z] ===================================================================================================================
00:08:29.830 [2024-11-20T14:18:18.790Z] Total : 25412.11 99.27 0.00 0.00 0.00 0.00 0.00
00:08:29.830
00:08:30.768 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:08:30.768 Nvme0n1 : 10.00 25424.40 99.31 0.00 0.00 0.00 0.00 0.00 [2024-11-20T14:18:19.728Z] ===================================================================================================================
00:08:30.768 [2024-11-20T14:18:19.728Z] Total : 25424.40 99.31 0.00 0.00 0.00 0.00 0.00
00:08:30.768
00:08:30.768
00:08:30.768 Latency(us)
00:08:30.768 [2024-11-20T14:18:19.728Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:30.768 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:08:30.768 Nvme0n1 : 10.00 25419.75 99.30 0.00 0.00 5031.95 2252.80 8956.59 [2024-11-20T14:18:19.728Z] ===================================================================================================================
00:08:30.768 [2024-11-20T14:18:19.728Z] Total : 25419.75 99.30 0.00 0.00 5031.95 2252.80 8956.59
00:08:30.768 {
00:08:30.768 "results": [
00:08:30.768 {
00:08:30.768 "job": "Nvme0n1",
00:08:30.768 "core_mask": "0x2",
00:08:30.768 "workload": "randwrite",
00:08:30.768 "status": "finished",
00:08:30.768 "queue_depth": 128,
00:08:30.768 "io_size": 4096,
"runtime": 10.004306,
00:08:30.768 "iops": 25419.754253818304,
00:08:30.768 "mibps": 99.29591505397775,
00:08:30.768 "io_failed": 0,
00:08:30.768 "io_timeout": 0,
00:08:30.768 "avg_latency_us": 5031.950906581415,
00:08:30.768 "min_latency_us": 2252.8,
00:08:30.768 "max_latency_us": 8956.586666666666
00:08:30.768 }
00:08:30.768 ],
00:08:30.768 "core_count": 1
00:08:30.768 }
00:08:30.768 15:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 418727
00:08:30.768 15:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 418727 ']'
00:08:30.768 15:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 418727
00:08:30.768 15:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname
00:08:30.768 15:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:30.768 15:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 418727
00:08:30.768 15:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:08:30.768 15:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:08:30.768 15:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 418727'
00:08:30.768 killing process with pid 418727
00:08:30.768 15:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 418727
00:08:30.768 Received shutdown signal, test time was about 10.000000 seconds
00:08:30.768
00:08:30.768 Latency(us)
00:08:30.768 [2024-11-20T14:18:19.728Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:30.768 [2024-11-20T14:18:19.728Z] ===================================================================================================================
00:08:30.768 [2024-11-20T14:18:19.728Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:08:30.768 15:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 418727
00:08:30.768 15:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:08:31.029 15:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:08:31.029 15:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5d7d52ff-c6a7-40e4-969d-cacb7420bc3c
00:08:31.029 15:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:08:31.289 15:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:08:31.289 15:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]]
00:08:31.289 15:18:20
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:08:31.550 [2024-11-20 15:18:20.296486] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:08:31.550 15:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5d7d52ff-c6a7-40e4-969d-cacb7420bc3c
00:08:31.550 15:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0
00:08:31.550 15:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5d7d52ff-c6a7-40e4-969d-cacb7420bc3c
00:08:31.550 15:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:08:31.550 15:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:31.550 15:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:08:31.550 15:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:31.550 15:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:08:31.550 15:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:31.550 15:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:08:31.550 15:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:08:31.550 15:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5d7d52ff-c6a7-40e4-969d-cacb7420bc3c
00:08:31.550 request:
00:08:31.550 {
00:08:31.550 "uuid": "5d7d52ff-c6a7-40e4-969d-cacb7420bc3c",
00:08:31.550 "method": "bdev_lvol_get_lvstores",
00:08:31.550 "req_id": 1
00:08:31.550 }
00:08:31.550 Got JSON-RPC error response
00:08:31.550 response:
00:08:31.550 {
00:08:31.550 "code": -19,
00:08:31.550 "message": "No such device"
00:08:31.550 }
00:08:31.811 15:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1
00:08:31.811 15:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:08:31.811 15:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:08:31.811 15:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:08:31.811 15:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:31.811 aio_bdev 00:08:31.811 15:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 72f7a4aa-1479-4be7-90ca-c536f8a8f338 00:08:31.811 15:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=72f7a4aa-1479-4be7-90ca-c536f8a8f338 00:08:31.811 15:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:31.811 15:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:31.811 15:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:31.811 15:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:31.811 15:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:32.072 15:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 72f7a4aa-1479-4be7-90ca-c536f8a8f338 -t 2000 00:08:32.072 [ 00:08:32.072 { 00:08:32.072 "name": "72f7a4aa-1479-4be7-90ca-c536f8a8f338", 00:08:32.072 "aliases": [ 00:08:32.072 "lvs/lvol" 00:08:32.072 ], 00:08:32.072 "product_name": "Logical Volume", 00:08:32.072 "block_size": 4096, 00:08:32.072 "num_blocks": 38912, 00:08:32.072 "uuid": "72f7a4aa-1479-4be7-90ca-c536f8a8f338", 00:08:32.072 "assigned_rate_limits": { 00:08:32.072 "rw_ios_per_sec": 0, 00:08:32.072 "rw_mbytes_per_sec": 0, 00:08:32.072 "r_mbytes_per_sec": 0, 00:08:32.072 "w_mbytes_per_sec": 0 00:08:32.072 }, 00:08:32.072 "claimed": false, 00:08:32.072 "zoned": false, 00:08:32.072 "supported_io_types": { 00:08:32.072 "read": true, 00:08:32.072 "write": true, 00:08:32.072 "unmap": true, 00:08:32.072 "flush": false, 00:08:32.072 "reset": true, 00:08:32.072 "nvme_admin": false, 00:08:32.072 "nvme_io": false, 00:08:32.072 "nvme_io_md": false, 00:08:32.072 "write_zeroes": true, 00:08:32.072 "zcopy": false, 00:08:32.072 "get_zone_info": false, 00:08:32.072 "zone_management": false, 00:08:32.072 "zone_append": false, 00:08:32.072 "compare": false, 00:08:32.072 "compare_and_write": false, 00:08:32.072 "abort": false, 00:08:32.072 "seek_hole": true, 00:08:32.072 "seek_data": true, 00:08:32.072 "copy": false, 00:08:32.072 "nvme_iov_md": false 00:08:32.072 }, 00:08:32.072 "driver_specific": { 00:08:32.073 "lvol": { 00:08:32.073 "lvol_store_uuid": "5d7d52ff-c6a7-40e4-969d-cacb7420bc3c", 00:08:32.073 "base_bdev": "aio_bdev", 00:08:32.073 "thin_provision": false, 00:08:32.073 "num_allocated_clusters": 38, 00:08:32.073 "snapshot": false, 00:08:32.073 "clone": false, 00:08:32.073 "esnap_clone": false 00:08:32.073 } 00:08:32.073 } 00:08:32.073 } 00:08:32.073 ] 00:08:32.073 15:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:32.073 15:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5d7d52ff-c6a7-40e4-969d-cacb7420bc3c 00:08:32.073 
15:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:32.333 15:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:32.333 15:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5d7d52ff-c6a7-40e4-969d-cacb7420bc3c 00:08:32.333 15:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:32.593 15:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:32.593 15:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 72f7a4aa-1479-4be7-90ca-c536f8a8f338 00:08:32.593 15:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5d7d52ff-c6a7-40e4-969d-cacb7420bc3c 00:08:32.853 15:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:33.115 15:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:33.115 00:08:33.115 real 0m15.977s 00:08:33.115 user 0m15.664s 00:08:33.115 sys 0m1.407s 00:08:33.115 15:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.115 15:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:33.115 ************************************ 00:08:33.115 END TEST lvs_grow_clean 00:08:33.115 ************************************ 00:08:33.115 15:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:33.115 15:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:33.115 15:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.115 15:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:33.115 ************************************ 00:08:33.115 START TEST lvs_grow_dirty 00:08:33.115 ************************************ 00:08:33.115 15:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:33.115 15:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:33.115 15:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:33.115 15:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:33.115 15:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:33.115 15:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:33.115 15:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:33.115 15:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:33.115 15:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:33.115 15:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:33.376 15:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:33.376 15:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:33.637 15:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=eea1ffdb-608e-496d-ab02-9b3620886167 00:08:33.637 15:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eea1ffdb-608e-496d-ab02-9b3620886167 00:08:33.637 15:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:33.637 15:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:33.637 15:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:33.637 15:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u eea1ffdb-608e-496d-ab02-9b3620886167 lvol 150 00:08:33.897 15:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=acf4cbee-41cf-49a3-85cf-f2875d22559a 00:08:33.897 15:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:33.897 15:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:33.897 [2024-11-20 15:18:22.837739] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:33.897 [2024-11-20 15:18:22.837781] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:33.897 true 00:08:34.157 15:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eea1ffdb-608e-496d-ab02-9b3620886167 00:08:34.157 15:18:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:34.157 15:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:34.157 15:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:34.418 15:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 acf4cbee-41cf-49a3-85cf-f2875d22559a 00:08:34.678 15:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:34.678 [2024-11-20 15:18:23.531739] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:34.678 15:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:34.938 15:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=422530 00:08:34.938 15:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:34.938 15:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:34.938 15:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 422530 /var/tmp/bdevperf.sock 00:08:34.938 15:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 422530 ']' 00:08:34.938 15:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:34.938 15:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:34.938 15:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:34.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:34.938 15:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:34.938 15:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:34.938 [2024-11-20 15:18:23.744311] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
00:08:34.938 [2024-11-20 15:18:23.744360] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid422530 ] 00:08:34.938 [2024-11-20 15:18:23.803157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.938 [2024-11-20 15:18:23.832863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.198 15:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:35.198 15:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:35.198 15:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:35.458 Nvme0n1 00:08:35.458 15:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:35.458 [ 00:08:35.458 { 00:08:35.458 "name": "Nvme0n1", 00:08:35.458 "aliases": [ 00:08:35.458 "acf4cbee-41cf-49a3-85cf-f2875d22559a" 00:08:35.458 ], 00:08:35.458 "product_name": "NVMe disk", 00:08:35.458 "block_size": 4096, 00:08:35.458 "num_blocks": 38912, 00:08:35.458 "uuid": "acf4cbee-41cf-49a3-85cf-f2875d22559a", 00:08:35.458 "numa_id": 0, 00:08:35.458 "assigned_rate_limits": { 00:08:35.458 "rw_ios_per_sec": 0, 00:08:35.458 "rw_mbytes_per_sec": 0, 00:08:35.458 "r_mbytes_per_sec": 0, 00:08:35.458 "w_mbytes_per_sec": 0 00:08:35.458 }, 00:08:35.458 "claimed": false, 00:08:35.458 "zoned": false, 00:08:35.458 "supported_io_types": { 00:08:35.458 "read": true, 00:08:35.458 "write": true, 00:08:35.458 "unmap": true, 00:08:35.458 "flush": true, 00:08:35.458 "reset": true, 00:08:35.458 "nvme_admin": true, 00:08:35.458 "nvme_io": true, 00:08:35.458 "nvme_io_md": false, 00:08:35.458 "write_zeroes": true, 00:08:35.458 "zcopy": false, 00:08:35.458 "get_zone_info": false, 00:08:35.458 "zone_management": false, 00:08:35.458 "zone_append": false, 00:08:35.458 "compare": true, 00:08:35.458 "compare_and_write": true, 00:08:35.458 "abort": true, 00:08:35.458 "seek_hole": false, 00:08:35.458 "seek_data": false, 00:08:35.458 "copy": true, 00:08:35.458 "nvme_iov_md": false 00:08:35.458 }, 00:08:35.458 "memory_domains": [ 00:08:35.458 { 00:08:35.458 "dma_device_id": "system", 00:08:35.458 "dma_device_type": 1 00:08:35.458 } 00:08:35.458 ], 00:08:35.458 "driver_specific": { 00:08:35.458 "nvme": [ 00:08:35.458 { 00:08:35.458 "trid": { 00:08:35.458 "trtype": "TCP", 00:08:35.458 "adrfam": "IPv4", 00:08:35.458 "traddr": "10.0.0.2", 00:08:35.458 "trsvcid": "4420", 00:08:35.458 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:35.458 }, 00:08:35.458 "ctrlr_data": { 00:08:35.458 "cntlid": 1, 00:08:35.458 "vendor_id": "0x8086", 00:08:35.458 "model_number": "SPDK bdev Controller", 00:08:35.458 "serial_number": "SPDK0", 00:08:35.458 "firmware_revision": "25.01", 00:08:35.458 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:35.458 "oacs": { 00:08:35.458 "security": 0, 00:08:35.458 "format": 0, 00:08:35.458 "firmware": 0, 00:08:35.458 "ns_manage": 0 00:08:35.458 }, 00:08:35.458 "multi_ctrlr": true, 00:08:35.458 
"ana_reporting": false 00:08:35.458 }, 00:08:35.458 "vs": { 00:08:35.458 "nvme_version": "1.3" 00:08:35.458 }, 00:08:35.458 "ns_data": { 00:08:35.458 "id": 1, 00:08:35.458 "can_share": true 00:08:35.458 } 00:08:35.458 } 00:08:35.458 ], 00:08:35.458 "mp_policy": "active_passive" 00:08:35.458 } 00:08:35.458 } 00:08:35.458 ] 00:08:35.458 15:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=422596 00:08:35.458 15:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:35.458 15:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:35.719 Running I/O for 10 seconds... 00:08:36.661 Latency(us) 00:08:36.661 [2024-11-20T14:18:25.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:36.661 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.661 Nvme0n1 : 1.00 25111.00 98.09 0.00 0.00 0.00 0.00 0.00 00:08:36.661 [2024-11-20T14:18:25.621Z] =================================================================================================================== 00:08:36.661 [2024-11-20T14:18:25.621Z] Total : 25111.00 98.09 0.00 0.00 0.00 0.00 0.00 00:08:36.661 00:08:37.601 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u eea1ffdb-608e-496d-ab02-9b3620886167 00:08:37.601 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.601 Nvme0n1 : 2.00 25226.50 98.54 0.00 0.00 0.00 0.00 0.00 00:08:37.601 [2024-11-20T14:18:26.561Z] =================================================================================================================== 00:08:37.601 [2024-11-20T14:18:26.561Z] Total : 25226.50 98.54 0.00 0.00 0.00 0.00 0.00 00:08:37.601 00:08:37.601 true 00:08:37.862 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eea1ffdb-608e-496d-ab02-9b3620886167 00:08:37.862 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:37.862 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:37.862 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:37.862 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 422596 00:08:38.802 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.802 Nvme0n1 : 3.00 25286.67 98.78 0.00 0.00 0.00 0.00 0.00 00:08:38.802 [2024-11-20T14:18:27.762Z] =================================================================================================================== 00:08:38.802 [2024-11-20T14:18:27.762Z] Total : 25286.67 98.78 0.00 0.00 0.00 0.00 0.00 00:08:38.802 00:08:39.744 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.744 Nvme0n1 : 4.00 25332.50 98.96 0.00 0.00 0.00 0.00 0.00 00:08:39.744 [2024-11-20T14:18:28.704Z] 
=================================================================================================================== 00:08:39.744 [2024-11-20T14:18:28.704Z] Total : 25332.50 98.96 0.00 0.00 0.00 0.00 0.00 00:08:39.744 00:08:40.685 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.685 Nvme0n1 : 5.00 25360.40 99.06 0.00 0.00 0.00 0.00 0.00 00:08:40.685 [2024-11-20T14:18:29.645Z] =================================================================================================================== 00:08:40.685 [2024-11-20T14:18:29.645Z] Total : 25360.40 99.06 0.00 0.00 0.00 0.00 0.00 00:08:40.685 00:08:41.628 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.628 Nvme0n1 : 6.00 25378.83 99.14 0.00 0.00 0.00 0.00 0.00 00:08:41.628 [2024-11-20T14:18:30.589Z] =================================================================================================================== 00:08:41.629 [2024-11-20T14:18:30.589Z] Total : 25378.83 99.14 0.00 0.00 0.00 0.00 0.00 00:08:41.629 00:08:42.642 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.642 Nvme0n1 : 7.00 25401.43 99.22 0.00 0.00 0.00 0.00 0.00 00:08:42.642 [2024-11-20T14:18:31.602Z] =================================================================================================================== 00:08:42.642 [2024-11-20T14:18:31.602Z] Total : 25401.43 99.22 0.00 0.00 0.00 0.00 0.00 00:08:42.642 00:08:43.683 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.683 Nvme0n1 : 8.00 25410.25 99.26 0.00 0.00 0.00 0.00 0.00 00:08:43.683 [2024-11-20T14:18:32.643Z] =================================================================================================================== 00:08:43.683 [2024-11-20T14:18:32.643Z] Total : 25410.25 99.26 0.00 0.00 0.00 0.00 0.00 00:08:43.683 00:08:44.626 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.626 Nvme0n1 : 9.00 25424.00 99.31 0.00 0.00 0.00 0.00 0.00 00:08:44.626 [2024-11-20T14:18:33.586Z] =================================================================================================================== 00:08:44.626 [2024-11-20T14:18:33.586Z] Total : 25424.00 99.31 0.00 0.00 0.00 0.00 0.00 00:08:44.626 00:08:45.567 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.567 Nvme0n1 : 10.00 25435.20 99.36 0.00 0.00 0.00 0.00 0.00 00:08:45.567 [2024-11-20T14:18:34.527Z] =================================================================================================================== 00:08:45.567 [2024-11-20T14:18:34.527Z] Total : 25435.20 99.36 0.00 0.00 0.00 0.00 0.00 00:08:45.567 00:08:45.567 00:08:45.567 Latency(us) 00:08:45.567 [2024-11-20T14:18:34.527Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.567 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.567 Nvme0n1 : 10.00 25436.40 99.36 0.00 0.00 5029.38 2143.57 8137.39 00:08:45.567 [2024-11-20T14:18:34.527Z] =================================================================================================================== 00:08:45.567 [2024-11-20T14:18:34.527Z] Total : 25436.40 99.36 0.00 0.00 5029.38 2143.57 8137.39 00:08:45.567 { 00:08:45.567 "results": [ 00:08:45.567 { 00:08:45.567 "job": "Nvme0n1", 00:08:45.567 "core_mask": "0x2", 00:08:45.567 "workload": "randwrite", 00:08:45.567 "status": "finished", 00:08:45.567 "queue_depth": 128, 00:08:45.567 "io_size": 4096, 00:08:45.568 
"runtime": 10.00456, 00:08:45.568 "iops": 25436.40100114348, 00:08:45.568 "mibps": 99.36094141071672, 00:08:45.568 "io_failed": 0, 00:08:45.568 "io_timeout": 0, 00:08:45.568 "avg_latency_us": 5029.376107303782, 00:08:45.568 "min_latency_us": 2143.5733333333333, 00:08:45.568 "max_latency_us": 8137.386666666666 00:08:45.568 } 00:08:45.568 ], 00:08:45.568 "core_count": 1 00:08:45.568 } 00:08:45.568 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 422530 00:08:45.568 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 422530 ']' 00:08:45.568 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 422530 00:08:45.829 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:45.829 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:45.829 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 422530 00:08:45.829 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:45.829 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:45.829 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 422530' 00:08:45.829 killing process with pid 422530 00:08:45.829 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 422530 00:08:45.829 Received shutdown signal, test time was about 10.000000 seconds 00:08:45.829 00:08:45.829 Latency(us) 00:08:45.829 [2024-11-20T14:18:34.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.829 [2024-11-20T14:18:34.789Z] =================================================================================================================== 00:08:45.829 [2024-11-20T14:18:34.789Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:45.829 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 422530 00:08:45.829 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:46.091 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:46.352 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eea1ffdb-608e-496d-ab02-9b3620886167 00:08:46.352 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:46.352 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:46.352 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:46.352 15:18:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 418185 00:08:46.352 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 418185 00:08:46.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 418185 Killed "${NVMF_APP[@]}" "$@" 00:08:46.352 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:46.352 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:46.352 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:46.352 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:46.352 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:46.352 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=424759 00:08:46.352 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 424759 00:08:46.352 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:46.352 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 424759 ']' 00:08:46.352 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.352 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:46.352 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.352 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:46.352 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:46.613 [2024-11-20 15:18:35.355953] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:08:46.613 [2024-11-20 15:18:35.356014] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.613 [2024-11-20 15:18:35.446670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.613 [2024-11-20 15:18:35.477101] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:46.613 [2024-11-20 15:18:35.477130] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:46.613 [2024-11-20 15:18:35.477136] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:46.613 [2024-11-20 15:18:35.477142] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:08:46.613 [2024-11-20 15:18:35.477146] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:46.613 [2024-11-20 15:18:35.477637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.557 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:47.557 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:47.557 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:47.557 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:47.557 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:47.557 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:47.557 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:47.557 [2024-11-20 15:18:36.343362] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:47.557 [2024-11-20 15:18:36.343433] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:47.557 [2024-11-20 15:18:36.343455] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:47.557 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:47.557 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev acf4cbee-41cf-49a3-85cf-f2875d22559a 00:08:47.557 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=acf4cbee-41cf-49a3-85cf-f2875d22559a 00:08:47.557 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:47.557 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:47.557 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:47.557 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:47.557 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:47.819 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b acf4cbee-41cf-49a3-85cf-f2875d22559a -t 2000 00:08:47.819 [ 00:08:47.819 { 00:08:47.819 "name": "acf4cbee-41cf-49a3-85cf-f2875d22559a", 00:08:47.819 "aliases": [ 00:08:47.819 "lvs/lvol" 00:08:47.819 ], 00:08:47.819 "product_name": "Logical Volume", 00:08:47.819 "block_size": 4096, 00:08:47.819 "num_blocks": 38912, 00:08:47.819 "uuid": "acf4cbee-41cf-49a3-85cf-f2875d22559a", 00:08:47.819 "assigned_rate_limits": { 00:08:47.819 "rw_ios_per_sec": 0, 00:08:47.819 "rw_mbytes_per_sec": 0, 
00:08:47.819 "r_mbytes_per_sec": 0, 00:08:47.819 "w_mbytes_per_sec": 0 00:08:47.819 }, 00:08:47.819 "claimed": false, 00:08:47.819 "zoned": false, 00:08:47.819 "supported_io_types": { 00:08:47.819 "read": true, 00:08:47.819 "write": true, 00:08:47.819 "unmap": true, 00:08:47.819 "flush": false, 00:08:47.819 "reset": true, 00:08:47.819 "nvme_admin": false, 00:08:47.819 "nvme_io": false, 00:08:47.819 "nvme_io_md": false, 00:08:47.819 "write_zeroes": true, 00:08:47.819 "zcopy": false, 00:08:47.819 "get_zone_info": false, 00:08:47.819 "zone_management": false, 00:08:47.819 "zone_append": false, 00:08:47.819 "compare": false, 00:08:47.819 "compare_and_write": false, 00:08:47.819 "abort": false, 00:08:47.819 "seek_hole": true, 00:08:47.819 "seek_data": true, 00:08:47.819 "copy": false, 00:08:47.819 "nvme_iov_md": false 00:08:47.819 }, 00:08:47.819 "driver_specific": { 00:08:47.819 "lvol": { 00:08:47.819 "lvol_store_uuid": "eea1ffdb-608e-496d-ab02-9b3620886167", 00:08:47.819 "base_bdev": "aio_bdev", 00:08:47.819 "thin_provision": false, 00:08:47.819 "num_allocated_clusters": 38, 00:08:47.819 "snapshot": false, 00:08:47.819 "clone": false, 00:08:47.819 "esnap_clone": false 00:08:47.819 } 00:08:47.819 } 00:08:47.819 } 00:08:47.819 ] 00:08:47.819 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:47.819 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eea1ffdb-608e-496d-ab02-9b3620886167 00:08:47.819 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:48.081 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:48.081 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eea1ffdb-608e-496d-ab02-9b3620886167 00:08:48.081 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:48.343 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:48.343 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:48.343 [2024-11-20 15:18:37.228234] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:48.343 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eea1ffdb-608e-496d-ab02-9b3620886167 00:08:48.343 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:48.343 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eea1ffdb-608e-496d-ab02-9b3620886167 00:08:48.343 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:48.343 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.343 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:48.343 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.343 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:48.343 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.343 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:48.343 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:48.343 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eea1ffdb-608e-496d-ab02-9b3620886167 00:08:48.604 request: 00:08:48.604 { 00:08:48.604 "uuid": "eea1ffdb-608e-496d-ab02-9b3620886167", 00:08:48.604 "method": "bdev_lvol_get_lvstores", 00:08:48.604 "req_id": 1 00:08:48.604 } 00:08:48.604 Got JSON-RPC error response 00:08:48.604 response: 00:08:48.604 { 00:08:48.604 "code": -19, 00:08:48.604 "message": "No such device" 00:08:48.604 } 00:08:48.604 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:48.604 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:48.604 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:48.604 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:48.604 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:48.864 aio_bdev 00:08:48.865 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev acf4cbee-41cf-49a3-85cf-f2875d22559a 00:08:48.865 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=acf4cbee-41cf-49a3-85cf-f2875d22559a 00:08:48.865 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:48.865 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:48.865 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:48.865 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:48.865 15:18:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:48.865 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b acf4cbee-41cf-49a3-85cf-f2875d22559a -t 2000 00:08:49.126 [ 00:08:49.126 { 00:08:49.126 "name": "acf4cbee-41cf-49a3-85cf-f2875d22559a", 00:08:49.126 "aliases": [ 00:08:49.126 "lvs/lvol" 00:08:49.126 ], 00:08:49.126 "product_name": "Logical Volume", 00:08:49.126 "block_size": 4096, 00:08:49.126 "num_blocks": 38912, 00:08:49.126 "uuid": "acf4cbee-41cf-49a3-85cf-f2875d22559a", 00:08:49.126 "assigned_rate_limits": { 00:08:49.126 "rw_ios_per_sec": 0, 00:08:49.126 "rw_mbytes_per_sec": 0, 00:08:49.126 "r_mbytes_per_sec": 0, 00:08:49.126 "w_mbytes_per_sec": 0 00:08:49.126 }, 00:08:49.126 "claimed": false, 00:08:49.126 "zoned": false, 00:08:49.126 "supported_io_types": { 00:08:49.126 "read": true, 00:08:49.126 "write": true, 00:08:49.126 "unmap": true, 00:08:49.126 "flush": false, 00:08:49.126 "reset": true, 00:08:49.126 "nvme_admin": false, 00:08:49.126 "nvme_io": false, 00:08:49.126 "nvme_io_md": false, 00:08:49.126 "write_zeroes": true, 00:08:49.126 "zcopy": false, 00:08:49.126 "get_zone_info": false, 00:08:49.126 "zone_management": false, 00:08:49.126 "zone_append": false, 00:08:49.126 "compare": false, 00:08:49.126 "compare_and_write": false, 00:08:49.126 "abort": false, 00:08:49.126 "seek_hole": true, 00:08:49.126 "seek_data": true, 00:08:49.126 "copy": false, 00:08:49.126 "nvme_iov_md": false 00:08:49.126 }, 00:08:49.126 "driver_specific": { 00:08:49.126 "lvol": { 00:08:49.126 "lvol_store_uuid": "eea1ffdb-608e-496d-ab02-9b3620886167", 00:08:49.126 "base_bdev": "aio_bdev", 00:08:49.126 "thin_provision": false, 00:08:49.126 "num_allocated_clusters": 38, 00:08:49.126 "snapshot": false, 00:08:49.126 "clone": false, 00:08:49.126 "esnap_clone": false 00:08:49.126 } 00:08:49.126 } 00:08:49.126 } 00:08:49.126 ] 00:08:49.126 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:49.126 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eea1ffdb-608e-496d-ab02-9b3620886167 00:08:49.126 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:49.386 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:49.386 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eea1ffdb-608e-496d-ab02-9b3620886167 00:08:49.386 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:49.386 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:49.386 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete acf4cbee-41cf-49a3-85cf-f2875d22559a 00:08:49.647 15:18:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u eea1ffdb-608e-496d-ab02-9b3620886167 00:08:49.908 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:49.908 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:49.908 00:08:49.908 real 0m16.863s 00:08:49.908 user 0m44.566s 00:08:49.908 sys 0m2.966s 00:08:49.908 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.908 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:49.908 ************************************ 00:08:49.908 END TEST lvs_grow_dirty 00:08:49.908 ************************************ 00:08:50.169 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:50.169 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:50.169 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:50.169 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:50.169 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:50.169 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:50.169 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:50.169 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:50.169 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:50.169 nvmf_trace.0 00:08:50.169 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:50.169 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:50.169 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:50.169 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:50.169 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:50.169 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:50.169 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:50.169 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:50.169 rmmod nvme_tcp 00:08:50.169 rmmod nvme_fabrics 00:08:50.169 rmmod nvme_keyring 00:08:50.169 15:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:50.169 15:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:50.169 15:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:50.169 
15:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 424759 ']' 00:08:50.169 15:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 424759 00:08:50.169 15:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 424759 ']' 00:08:50.169 15:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 424759 00:08:50.169 15:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:50.169 15:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:50.169 15:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 424759 00:08:50.169 15:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:50.169 15:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:50.169 15:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 424759' 00:08:50.169 killing process with pid 424759 00:08:50.169 15:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 424759 00:08:50.169 15:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 424759 00:08:50.429 15:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:50.429 15:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:50.429 15:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:50.429 15:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:50.429 15:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:50.429 15:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:50.429 15:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:50.429 15:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:50.429 15:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:50.429 15:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.429 15:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.429 15:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.343 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:52.343 00:08:52.343 real 0m44.325s 00:08:52.343 user 1m6.663s 00:08:52.343 sys 0m10.546s 00:08:52.343 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.343 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:52.343 ************************************ 00:08:52.343 END TEST nvmf_lvs_grow 00:08:52.343 ************************************ 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:52.605 ************************************ 00:08:52.605 START TEST nvmf_bdev_io_wait 00:08:52.605 ************************************ 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:52.605 * Looking for test storage... 00:08:52.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:52.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.605 --rc genhtml_branch_coverage=1 00:08:52.605 --rc genhtml_function_coverage=1 00:08:52.605 --rc genhtml_legend=1 00:08:52.605 --rc geninfo_all_blocks=1 00:08:52.605 --rc geninfo_unexecuted_blocks=1 00:08:52.605 00:08:52.605 ' 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:52.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.605 --rc genhtml_branch_coverage=1 00:08:52.605 --rc genhtml_function_coverage=1 00:08:52.605 --rc genhtml_legend=1 00:08:52.605 --rc geninfo_all_blocks=1 00:08:52.605 --rc geninfo_unexecuted_blocks=1 00:08:52.605 00:08:52.605 ' 00:08:52.605 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:52.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.605 --rc genhtml_branch_coverage=1 00:08:52.605 --rc genhtml_function_coverage=1 00:08:52.605 --rc genhtml_legend=1 00:08:52.605 --rc geninfo_all_blocks=1 00:08:52.606 --rc geninfo_unexecuted_blocks=1 00:08:52.606 00:08:52.606 ' 00:08:52.606 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:52.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.606 --rc genhtml_branch_coverage=1 00:08:52.606 --rc genhtml_function_coverage=1 00:08:52.606 --rc genhtml_legend=1 00:08:52.606 --rc geninfo_all_blocks=1 00:08:52.606 --rc geninfo_unexecuted_blocks=1 00:08:52.606 00:08:52.606 ' 00:08:52.606 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:52.606 15:18:41 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:52.606 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:52.606 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.606 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.606 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.606 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.606 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.606 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.606 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.606 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.606 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.867 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:52.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:52.868 15:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:01.012 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:01.012 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:01.012 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:01.012 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:01.012 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:01.012 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:01.012 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:01.013 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:01.013 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.013 15:18:48 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:01.013 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:01.013 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:01.013 15:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:01.013 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:01.013 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:01.013 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:01.013 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:01.013 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:01.013 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:09:01.013 00:09:01.013 --- 10.0.0.2 ping statistics --- 00:09:01.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.013 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:09:01.013 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:01.013 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:01.013 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:09:01.013 00:09:01.013 --- 10.0.0.1 ping statistics --- 00:09:01.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.013 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:09:01.013 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:01.013 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:01.013 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:01.014 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:01.014 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:01.014 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:01.014 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:01.014 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:01.014 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:01.014 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:01.014 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:01.014 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:01.014 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:01.014 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=429807 00:09:01.014 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 429807 00:09:01.014 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:01.014 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 429807 ']' 00:09:01.014 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.014 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:01.014 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.014 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:01.014 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:01.014 [2024-11-20 15:18:49.157627] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
00:09:01.014 [2024-11-20 15:18:49.157693] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.014 [2024-11-20 15:18:49.257316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:01.014 [2024-11-20 15:18:49.312023] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.014 [2024-11-20 15:18:49.312076] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:01.014 [2024-11-20 15:18:49.312085] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:01.014 [2024-11-20 15:18:49.312092] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:01.014 [2024-11-20 15:18:49.312098] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:01.014 [2024-11-20 15:18:49.314114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.014 [2024-11-20 15:18:49.314277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:01.014 [2024-11-20 15:18:49.314606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:01.014 [2024-11-20 15:18:49.314609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.276 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:01.276 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:01.276 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:01.276 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:01.276 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:09:01.276 [2024-11-20 15:18:50.114958] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:01.276 Malloc0 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:01.276 [2024-11-20 15:18:50.180504] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=430086 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=430088 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:01.276 { 00:09:01.276 "params": { 
00:09:01.276 "name": "Nvme$subsystem", 00:09:01.276 "trtype": "$TEST_TRANSPORT", 00:09:01.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:01.276 "adrfam": "ipv4", 00:09:01.276 "trsvcid": "$NVMF_PORT", 00:09:01.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:01.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:01.276 "hdgst": ${hdgst:-false}, 00:09:01.276 "ddgst": ${ddgst:-false} 00:09:01.276 }, 00:09:01.276 "method": "bdev_nvme_attach_controller" 00:09:01.276 } 00:09:01.276 EOF 00:09:01.276 )") 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=430090 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:01.276 { 00:09:01.276 "params": { 00:09:01.276 "name": "Nvme$subsystem", 00:09:01.276 "trtype": "$TEST_TRANSPORT", 00:09:01.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:01.276 "adrfam": "ipv4", 00:09:01.276 "trsvcid": "$NVMF_PORT", 00:09:01.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:01.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:01.276 "hdgst": ${hdgst:-false}, 00:09:01.276 "ddgst": ${ddgst:-false} 00:09:01.276 }, 00:09:01.276 "method": "bdev_nvme_attach_controller" 00:09:01.276 } 00:09:01.276 EOF 00:09:01.276 )") 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=430093 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:01.276 { 00:09:01.276 "params": { 00:09:01.276 "name": "Nvme$subsystem", 00:09:01.276 "trtype": "$TEST_TRANSPORT", 00:09:01.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:01.276 "adrfam": "ipv4", 00:09:01.276 "trsvcid": "$NVMF_PORT", 00:09:01.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:01.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:01.276 "hdgst": ${hdgst:-false}, 
00:09:01.276 "ddgst": ${ddgst:-false} 00:09:01.276 }, 00:09:01.276 "method": "bdev_nvme_attach_controller" 00:09:01.276 } 00:09:01.276 EOF 00:09:01.276 )") 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:01.276 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:01.277 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:01.277 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:01.277 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:01.277 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:01.277 { 00:09:01.277 "params": { 00:09:01.277 "name": "Nvme$subsystem", 00:09:01.277 "trtype": "$TEST_TRANSPORT", 00:09:01.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:01.277 "adrfam": "ipv4", 00:09:01.277 "trsvcid": "$NVMF_PORT", 00:09:01.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:01.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:01.277 "hdgst": ${hdgst:-false}, 00:09:01.277 "ddgst": ${ddgst:-false} 00:09:01.277 }, 00:09:01.277 "method": "bdev_nvme_attach_controller" 00:09:01.277 } 00:09:01.277 EOF 00:09:01.277 )") 00:09:01.277 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:01.277 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 430086 00:09:01.277 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:01.277 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:01.277 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:01.277 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:01.277 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:01.277 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:01.277 "params": { 00:09:01.277 "name": "Nvme1", 00:09:01.277 "trtype": "tcp", 00:09:01.277 "traddr": "10.0.0.2", 00:09:01.277 "adrfam": "ipv4", 00:09:01.277 "trsvcid": "4420", 00:09:01.277 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:01.277 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:01.277 "hdgst": false, 00:09:01.277 "ddgst": false 00:09:01.277 }, 00:09:01.277 "method": "bdev_nvme_attach_controller" 00:09:01.277 }' 00:09:01.277 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:01.277 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:01.277 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:01.277 "params": { 00:09:01.277 "name": "Nvme1", 00:09:01.277 "trtype": "tcp", 00:09:01.277 "traddr": "10.0.0.2", 00:09:01.277 "adrfam": "ipv4", 00:09:01.277 "trsvcid": "4420", 00:09:01.277 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:01.277 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:01.277 "hdgst": false, 00:09:01.277 "ddgst": false 00:09:01.277 }, 00:09:01.277 "method": "bdev_nvme_attach_controller" 00:09:01.277 }' 00:09:01.277 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:01.277 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:01.277 "params": { 00:09:01.277 "name": "Nvme1", 00:09:01.277 "trtype": "tcp", 00:09:01.277 "traddr": "10.0.0.2", 00:09:01.277 "adrfam": "ipv4", 00:09:01.277 "trsvcid": "4420", 00:09:01.277 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:01.277 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:01.277 "hdgst": false, 00:09:01.277 "ddgst": false 00:09:01.277 }, 00:09:01.277 "method": "bdev_nvme_attach_controller" 00:09:01.277 }' 00:09:01.277 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:01.277 15:18:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:01.277 "params": { 00:09:01.277 "name": "Nvme1", 00:09:01.277 "trtype": "tcp", 00:09:01.277 "traddr": "10.0.0.2", 00:09:01.277 "adrfam": "ipv4", 00:09:01.277 "trsvcid": "4420", 00:09:01.277 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:01.277 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:01.277 "hdgst": false, 00:09:01.277 "ddgst": false 00:09:01.277 }, 00:09:01.277 "method": "bdev_nvme_attach_controller" 00:09:01.277 }' 00:09:01.277 [2024-11-20 15:18:50.233248] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:09:01.277 [2024-11-20 15:18:50.233321] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:01.538 [2024-11-20 15:18:50.242190] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:09:01.538 [2024-11-20 15:18:50.242259] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:01.538 [2024-11-20 15:18:50.242618] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:09:01.538 [2024-11-20 15:18:50.242625] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
00:09:01.538 [2024-11-20 15:18:50.242679] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:01.538 [2024-11-20 15:18:50.242683] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:01.538 [2024-11-20 15:18:50.439050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.538 [2024-11-20 15:18:50.479405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:01.799 [2024-11-20 15:18:50.529586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.799 [2024-11-20 15:18:50.568244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:01.799 [2024-11-20 15:18:50.624135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.799 [2024-11-20 15:18:50.667471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:01.799 [2024-11-20 15:18:50.694734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.799 [2024-11-20 15:18:50.732668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:02.060 Running I/O for 1 seconds... 00:09:02.060 Running I/O for 1 seconds... 00:09:02.060 Running I/O for 1 seconds... 00:09:02.060 Running I/O for 1 seconds... 00:09:03.042 181496.00 IOPS, 708.97 MiB/s 00:09:03.042 Latency(us) 00:09:03.042 [2024-11-20T14:18:52.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:03.042 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:03.042 Nvme1n1 : 1.00 181135.57 707.56 0.00 0.00 702.61 302.08 1979.73 00:09:03.042 [2024-11-20T14:18:52.002Z] =================================================================================================================== 00:09:03.042 [2024-11-20T14:18:52.002Z] Total : 181135.57 707.56 0.00 0.00 702.61 302.08 1979.73 00:09:03.042 7574.00 IOPS, 29.59 MiB/s 00:09:03.042 Latency(us) 00:09:03.042 [2024-11-20T14:18:52.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:03.042 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:03.042 Nvme1n1 : 1.02 7606.19 29.71 0.00 0.00 16718.99 7318.19 30146.56 00:09:03.042 [2024-11-20T14:18:52.002Z] =================================================================================================================== 00:09:03.042 [2024-11-20T14:18:52.002Z] Total : 7606.19 29.71 0.00 0.00 16718.99 7318.19 30146.56 00:09:03.042 10880.00 IOPS, 42.50 MiB/s 00:09:03.042 Latency(us) 00:09:03.042 [2024-11-20T14:18:52.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:03.042 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:03.042 Nvme1n1 : 1.01 10913.51 42.63 0.00 0.00 11675.59 6990.51 23265.28 00:09:03.042 [2024-11-20T14:18:52.002Z] =================================================================================================================== 00:09:03.042 [2024-11-20T14:18:52.002Z] Total : 10913.51 42.63 0.00 0.00 11675.59 6990.51 23265.28 00:09:03.042 7292.00 IOPS, 28.48 MiB/s 00:09:03.042 Latency(us) 00:09:03.042 [2024-11-20T14:18:52.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:09:03.042 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:03.042 Nvme1n1 : 1.01 7404.56 28.92 0.00 0.00 17237.71 4341.76 41943.04 00:09:03.042 [2024-11-20T14:18:52.002Z] =================================================================================================================== 00:09:03.042 [2024-11-20T14:18:52.002Z] Total : 7404.56 28.92 0.00 0.00 17237.71 4341.76 41943.04 00:09:03.303 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 430088 00:09:03.303 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 430090 00:09:03.303 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 430093 00:09:03.303 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:03.303 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.303 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:03.303 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.303 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:03.303 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:03.303 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:03.303 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:03.303 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:03.303 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:03.303 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:03.303 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:03.303 rmmod nvme_tcp 00:09:03.303 rmmod nvme_fabrics 00:09:03.303 rmmod nvme_keyring 00:09:03.303 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:03.303 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:03.303 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:03.303 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 429807 ']' 00:09:03.303 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 429807 00:09:03.303 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 429807 ']' 00:09:03.303 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 429807 00:09:03.303 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:03.303 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:03.303 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 429807 00:09:03.303 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:03.303 15:18:52 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:03.303 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 429807' 00:09:03.303 killing process with pid 429807 00:09:03.303 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 429807 00:09:03.303 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 429807 00:09:03.564 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:03.564 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:03.564 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:03.564 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:03.564 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:03.564 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:03.564 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:03.564 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:03.564 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:03.564 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.564 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:03.564 15:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.109 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:06.109 00:09:06.109 real 0m13.119s 00:09:06.109 user 0m19.574s 00:09:06.109 sys 0m7.423s 00:09:06.109 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.109 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:06.109 ************************************ 00:09:06.109 END TEST nvmf_bdev_io_wait 00:09:06.109 ************************************ 00:09:06.109 15:18:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:06.109 15:18:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:06.109 15:18:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.109 15:18:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:06.109 ************************************ 00:09:06.109 START TEST nvmf_queue_depth 00:09:06.109 ************************************ 00:09:06.109 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:06.109 * Looking for test storage... 
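For reference, the nvmftestfini teardown traced just above relies on a tag-and-sweep iptables pattern: every rule is inserted with an 'SPDK_NVMF' comment, then removed in one pass by filtering the saved ruleset. A minimal sketch, reconstructed from the expanded commands in the trace (the ipts/iptr helper names and the rule shown are taken from the trace itself):

ipts() {
  # insert a rule, tagging it so cleanup can find it later
  iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

iptr() {
  # sweep: drop every SPDK_NVMF-tagged rule in one save/filter/restore pass
  iptables-save | grep -v SPDK_NVMF | iptables-restore
}

# open the NVMe/TCP listener port on the initiator-side interface
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# ... run tests ...
iptr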
00:09:06.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:06.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.110 --rc genhtml_branch_coverage=1 00:09:06.110 --rc genhtml_function_coverage=1 00:09:06.110 --rc genhtml_legend=1 00:09:06.110 --rc geninfo_all_blocks=1 00:09:06.110 --rc geninfo_unexecuted_blocks=1 00:09:06.110 00:09:06.110 ' 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:06.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.110 --rc genhtml_branch_coverage=1 00:09:06.110 --rc genhtml_function_coverage=1 00:09:06.110 --rc genhtml_legend=1 00:09:06.110 --rc geninfo_all_blocks=1 00:09:06.110 --rc geninfo_unexecuted_blocks=1 00:09:06.110 00:09:06.110 ' 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:06.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.110 --rc genhtml_branch_coverage=1 00:09:06.110 --rc genhtml_function_coverage=1 00:09:06.110 --rc genhtml_legend=1 00:09:06.110 --rc geninfo_all_blocks=1 00:09:06.110 --rc geninfo_unexecuted_blocks=1 00:09:06.110 00:09:06.110 ' 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:06.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.110 --rc genhtml_branch_coverage=1 00:09:06.110 --rc genhtml_function_coverage=1 00:09:06.110 --rc genhtml_legend=1 00:09:06.110 --rc geninfo_all_blocks=1 00:09:06.110 --rc geninfo_unexecuted_blocks=1 00:09:06.110 00:09:06.110 ' 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:06.110 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:06.110 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:06.111 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
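The "[: : integer expression expected" message recorded above is test's -eq receiving an empty string ('[' '' -eq 1 ']'); the harness tolerates it, but the usual guard is to default the operand before the numeric test. A sketch of that pattern (not a proposed patch to nvmf/common.sh):

    # '[' '' -eq 1 ']' errors because -eq needs integers on both sides.
    # Defaulting the operand keeps the test quiet:
    flag=""
    if [ "${flag:-0}" -eq 1 ]; then
        echo "feature on"
    fi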
MALLOC_BLOCK_SIZE=512 00:09:06.111 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:06.111 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:06.111 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:06.111 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:06.111 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:06.111 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:06.111 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:06.111 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.111 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.111 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.111 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:06.111 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:06.111 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:06.111 15:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.251 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:14.251 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:14.251 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:14.251 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:14.251 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:14.251 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:14.251 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:14.251 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:14.251 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:14.251 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
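The eval '_remove_spdk_ns 15> /dev/null' above is the harness muting xtrace for a single call: trace output appears to be routed to file descriptor 15, so redirecting fd 15 for one command discards just that command's trace. A minimal sketch of the general trick (names illustrative; this is an inference about the framework, not its code):

    set -x
    exec 15>&2            # open fd 15, initially pointing at stderr
    BASH_XTRACEFD=15      # bash now writes xtrace lines to fd 15
    inner() { echo step1; echo step2; }
    inner                 # body traced as usual
    inner 15> /dev/null   # fd 15 is /dev/null for this call: body trace vanishes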
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:14.252 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:14.252 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:14.252 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:14.252 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
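The "Found ..." lines above come from matching PCI vendor:device IDs and then globbing sysfs for the bound net device. The same two lookups by hand (lspci assumed available; the PCI address is the one from this run):

    # 1) Find Intel E810 functions by vendor:device ID (0x8086:0x159b).
    lspci -D -d 8086:159b
    # 2) Map one function to its kernel net device, as the loop above does.
    pci=0000:4b:00.0
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$dev" ] || continue          # glob misses if no netdev is bound
        echo "Found net devices under $pci: ${dev##*/}"
    done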
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:14.252 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:14.252 15:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:14.252 15:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:14.252 15:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:14.252 15:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:14.252 15:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:14.252 15:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:14.252 15:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:14.252 15:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:14.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:14.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.714 ms 00:09:14.252 00:09:14.252 --- 10.0.0.2 ping statistics --- 00:09:14.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.252 rtt min/avg/max/mdev = 0.714/0.714/0.714/0.000 ms 00:09:14.252 15:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:14.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:14.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:09:14.252 00:09:14.252 --- 10.0.0.1 ping statistics --- 00:09:14.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.252 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:09:14.252 15:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:14.252 15:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:14.252 15:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:14.252 15:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:14.252 15:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:14.252 15:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:14.253 15:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:14.253 15:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:14.253 15:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:14.253 15:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:14.253 15:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:14.253 15:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:14.253 15:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.253 15:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=434786 00:09:14.253 15:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 434786 00:09:14.253 15:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:14.253 15:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 434786 ']' 00:09:14.253 15:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.253 15:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.253 15:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.253 15:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.253 15:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.253 [2024-11-20 15:19:02.358227] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
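The two pings above verify the topology the harness just built: one E810 port (cvl_0_0, 10.0.0.2) moved into namespace cvl_0_0_ns_spdk as the target side, its peer (cvl_0_1, 10.0.0.1) left in the root namespace as the initiator. On a machine without the physical NICs, a veth pair gives the same shape (run as root; the veth and namespace names below are illustrative, not the harness's):

    ip netns add nvmf_tgt_ns
    ip link add veth0 type veth peer name veth1
    ip link set veth0 netns nvmf_tgt_ns
    ip netns exec nvmf_tgt_ns ip addr add 10.0.0.2/24 dev veth0   # target side
    ip addr add 10.0.0.1/24 dev veth1                             # initiator side
    ip netns exec nvmf_tgt_ns ip link set veth0 up
    ip netns exec nvmf_tgt_ns ip link set lo up
    ip link set veth1 up
    ping -c 1 10.0.0.2                             # initiator -> target
    ip netns exec nvmf_tgt_ns ping -c 1 10.0.0.1   # target -> initiator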
00:09:14.253 [2024-11-20 15:19:02.358288] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.253 [2024-11-20 15:19:02.462350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.253 [2024-11-20 15:19:02.512780] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:14.253 [2024-11-20 15:19:02.512833] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:14.253 [2024-11-20 15:19:02.512843] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:14.253 [2024-11-20 15:19:02.512850] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:14.253 [2024-11-20 15:19:02.512856] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:14.253 [2024-11-20 15:19:02.513630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.253 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:14.253 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:14.253 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:14.253 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:14.253 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.515 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:14.515 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:14.515 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.515 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.515 [2024-11-20 15:19:03.240920] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:14.515 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.515 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:14.515 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.515 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.515 Malloc0 00:09:14.515 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.515 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:14.515 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.515 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.515 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.515 15:19:03 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:14.515 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.515 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.515 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.515 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:14.515 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.515 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.515 [2024-11-20 15:19:03.302279] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:14.515 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.515 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=434987 00:09:14.515 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:14.515 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:14.515 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 434987 /var/tmp/bdevperf.sock 00:09:14.515 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 434987 ']' 00:09:14.515 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:14.515 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.515 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:14.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:14.515 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.515 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.515 [2024-11-20 15:19:03.361492] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
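Stripped of the xtrace plumbing, the target-side setup the test just completed is one app launch plus five RPCs. A sketch of the same sequence (paths shortened; the NQN, serial, sizes, address, and port are the ones from the log):

    # Launch the target inside the namespace, then configure it over the
    # default /var/tmp/spdk.sock RPC socket (unix sockets cross netns).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    # (the harness waits for the socket with waitforlisten before continuing)
    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0     # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420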
00:09:14.515 [2024-11-20 15:19:03.361558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid434987 ] 00:09:14.515 [2024-11-20 15:19:03.454840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.776 [2024-11-20 15:19:03.509014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.347 15:19:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.347 15:19:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:15.347 15:19:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:15.348 15:19:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.348 15:19:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.609 NVMe0n1 00:09:15.609 15:19:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.609 15:19:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:15.609 Running I/O for 10 seconds... 00:09:17.941 8347.00 IOPS, 32.61 MiB/s [2024-11-20T14:19:07.842Z] 9728.00 IOPS, 38.00 MiB/s [2024-11-20T14:19:08.783Z] 10357.33 IOPS, 40.46 MiB/s [2024-11-20T14:19:09.731Z] 11003.00 IOPS, 42.98 MiB/s [2024-11-20T14:19:10.681Z] 11467.00 IOPS, 44.79 MiB/s [2024-11-20T14:19:11.623Z] 11774.67 IOPS, 45.99 MiB/s [2024-11-20T14:19:12.567Z] 12040.29 IOPS, 47.03 MiB/s [2024-11-20T14:19:13.509Z] 12208.00 IOPS, 47.69 MiB/s [2024-11-20T14:19:14.894Z] 12332.11 IOPS, 48.17 MiB/s [2024-11-20T14:19:14.894Z] 12457.60 IOPS, 48.66 MiB/s 00:09:25.934 Latency(us) 00:09:25.934 [2024-11-20T14:19:14.894Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:25.934 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:25.934 Verification LBA range: start 0x0 length 0x4000 00:09:25.934 NVMe0n1 : 10.06 12478.64 48.74 0.00 0.00 81738.31 22500.69 76458.67 00:09:25.934 [2024-11-20T14:19:14.894Z] =================================================================================================================== 00:09:25.934 [2024-11-20T14:19:14.894Z] Total : 12478.64 48.74 0.00 0.00 81738.31 22500.69 76458.67 00:09:25.934 { 00:09:25.934 "results": [ 00:09:25.934 { 00:09:25.934 "job": "NVMe0n1", 00:09:25.934 "core_mask": "0x1", 00:09:25.934 "workload": "verify", 00:09:25.934 "status": "finished", 00:09:25.934 "verify_range": { 00:09:25.934 "start": 0, 00:09:25.934 "length": 16384 00:09:25.934 }, 00:09:25.934 "queue_depth": 1024, 00:09:25.934 "io_size": 4096, 00:09:25.934 "runtime": 10.061436, 00:09:25.934 "iops": 12478.636250332458, 00:09:25.934 "mibps": 48.744672852861164, 00:09:25.934 "io_failed": 0, 00:09:25.934 "io_timeout": 0, 00:09:25.934 "avg_latency_us": 81738.3109360987, 00:09:25.934 "min_latency_us": 22500.693333333333, 00:09:25.934 "max_latency_us": 76458.66666666667 00:09:25.934 } 00:09:25.934 ], 00:09:25.934 "core_count": 1 00:09:25.934 } 00:09:25.934 15:19:14 
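The results object printed above is plain JSON from bdevperf, so the headline numbers can be pulled back out mechanically. A sketch assuming the object was saved to results.json (jq and that filename are assumptions, not part of the harness):

    jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' results.json
    # Sanity check against the table: 12478.64 IOPS * 4096 B per IO / 2^20 = 48.74 MiB/s.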
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 434987 00:09:25.934 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 434987 ']' 00:09:25.934 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 434987 00:09:25.934 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:25.934 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:25.934 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 434987 00:09:25.934 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:25.934 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:25.934 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 434987' 00:09:25.934 killing process with pid 434987 00:09:25.934 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 434987 00:09:25.934 Received shutdown signal, test time was about 10.000000 seconds 00:09:25.934 00:09:25.934 Latency(us) 00:09:25.934 [2024-11-20T14:19:14.894Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:25.934 [2024-11-20T14:19:14.894Z] =================================================================================================================== 00:09:25.934 [2024-11-20T14:19:14.894Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:25.934 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 434987 00:09:25.934 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:25.934 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:25.934 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:25.934 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:25.934 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:25.934 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:25.934 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:25.934 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:25.934 rmmod nvme_tcp 00:09:25.934 rmmod nvme_fabrics 00:09:25.934 rmmod nvme_keyring 00:09:25.934 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:25.934 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:25.934 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:25.934 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 434786 ']' 00:09:25.934 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 434786 00:09:25.934 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 434786 ']' 00:09:25.934 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 434786 00:09:25.934 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:25.934 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:25.934 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 434786 00:09:25.934 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:25.935 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:25.935 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 434786' 00:09:25.935 killing process with pid 434786 00:09:25.935 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 434786 00:09:25.935 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 434786 00:09:26.196 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:26.196 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:26.196 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:26.196 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:26.196 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:26.196 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:26.196 15:19:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:26.196 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:26.196 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:26.196 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.196 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.196 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:28.741 00:09:28.741 real 0m22.528s 00:09:28.741 user 0m25.933s 00:09:28.741 sys 0m7.020s 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:28.741 ************************************ 00:09:28.741 END TEST nvmf_queue_depth 00:09:28.741 ************************************ 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core -- 
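The iptables-save | grep -v SPDK_NVMF | iptables-restore teardown above works because every rule the test inserted carried an SPDK_NVMF comment (see the ipts call at 15:19:02), so filtering the saved ruleset by that tag removes exactly the test's rules. The tag-and-sweep pattern in isolation:

    # Insert a rule tagged for later bulk removal, as nvmf/common.sh does:
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # ... test runs ...
    # Sweep every tagged rule in one pass:
    iptables-save | grep -v SPDK_NVMF | iptables-restore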
common/autotest_common.sh@10 -- # set +x 00:09:28.741 ************************************ 00:09:28.741 START TEST nvmf_target_multipath 00:09:28.741 ************************************ 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:28.741 * Looking for test storage... 00:09:28.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:28.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.741 --rc genhtml_branch_coverage=1 00:09:28.741 --rc genhtml_function_coverage=1 00:09:28.741 --rc genhtml_legend=1 00:09:28.741 --rc geninfo_all_blocks=1 00:09:28.741 --rc geninfo_unexecuted_blocks=1 00:09:28.741 00:09:28.741 ' 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:28.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.741 --rc genhtml_branch_coverage=1 00:09:28.741 --rc genhtml_function_coverage=1 00:09:28.741 --rc genhtml_legend=1 00:09:28.741 --rc geninfo_all_blocks=1 00:09:28.741 --rc geninfo_unexecuted_blocks=1 00:09:28.741 00:09:28.741 ' 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:28.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.741 --rc genhtml_branch_coverage=1 00:09:28.741 --rc genhtml_function_coverage=1 00:09:28.741 --rc genhtml_legend=1 00:09:28.741 --rc geninfo_all_blocks=1 00:09:28.741 --rc geninfo_unexecuted_blocks=1 00:09:28.741 00:09:28.741 ' 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:28.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.741 --rc genhtml_branch_coverage=1 00:09:28.741 --rc genhtml_function_coverage=1 00:09:28.741 --rc genhtml_legend=1 00:09:28.741 --rc geninfo_all_blocks=1 00:09:28.741 --rc geninfo_unexecuted_blocks=1 00:09:28.741 00:09:28.741 ' 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:28.741 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
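The ballooning PATH in the paths/export.sh traces (here and in the queue_depth run earlier) is what unconditional prepending looks like when the file is sourced once per test: the same toolchain directories are re-added on every source. An idempotent prepend is a one-liner; a sketch (helper name illustrative, not a patch to export.sh):

    # Prepend a directory to PATH only if it is not already present.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;                   # already present: no-op
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/go/1.21.1/bin        # second call changes nothing
    export PATH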
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:28.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:28.742 15:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:36.881 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:36.881 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:36.881 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:36.881 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:36.881 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:36.881 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:36.881 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:36.881 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:36.881 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:36.881 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:36.881 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:36.881 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:36.881 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:36.881 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:36.881 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:36.881 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:36.881 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:36.881 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:36.881 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:36.881 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:36.881 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:36.881 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:36.881 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:36.881 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:36.881 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:36.881 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:36.881 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:36.881 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:36.881 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:36.881 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:36.881 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:36.881 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:36.881 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:36.881 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:36.881 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:36.881 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:36.881 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:36.881 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:36.881 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:36.881 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:36.882 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:36.882 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:36.882 15:19:24 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:36.882 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:36.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:36.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:09:36.882 00:09:36.882 --- 10.0.0.2 ping statistics --- 00:09:36.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.882 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:36.882 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:36.882 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:09:36.882 00:09:36.882 --- 10.0.0.1 ping statistics --- 00:09:36.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.882 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:36.882 only one NIC for nvmf test 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
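The namespace plumbing traced above (nvmf/common.sh@265-291) is a compact recipe: flush stale addresses, move the target-side port into a private network namespace, address both ends of the link, open TCP/4420 through the firewall, and ping in both directions to prove connectivity. A minimal stand-alone sketch, assuming the same cvl_0_0/cvl_0_1 interface names this run discovered (the real helper is nvmf_tcp_init in test/nvmf/common.sh):

#!/usr/bin/env bash
# Re-creation of the nvmf_tcp_init sequence traced above; run as root.
set -e
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"          # target port now lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Tag the rule so teardown can find it again (the SPDK_NVMF comment below):
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                       # root ns -> namespace
ip netns exec "$NS" ping -c 1 10.0.0.1   # namespace -> root ns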
00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:36.882 rmmod nvme_tcp 00:09:36.882 rmmod nvme_fabrics 00:09:36.882 rmmod nvme_keyring 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.882 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.268 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:38.268 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:38.268 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:38.268 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:38.268 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:38.268 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:38.268 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:38.268 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:38.268 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:38.268 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:38.268 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:38.268 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:38.268 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:38.268 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:38.268 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:38.268 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:38.268 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:38.268 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:38.268 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:38.268 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:38.268 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:38.268 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:38.268 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.268 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:38.268 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.268 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:38.268 00:09:38.268 real 0m9.913s 00:09:38.268 user 0m2.128s 00:09:38.268 sys 0m5.745s 00:09:38.268 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.268 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:38.268 ************************************ 00:09:38.268 END TEST nvmf_target_multipath 00:09:38.268 ************************************ 00:09:38.268 15:19:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:38.268 15:19:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:38.268 15:19:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.268 15:19:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:38.268 ************************************ 00:09:38.268 START TEST nvmf_zcopy 00:09:38.268 ************************************ 00:09:38.268 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:38.530 * Looking for test storage... 
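The matching teardown, nvmftestfini, runs twice above: once directly from multipath.sh@47 and once more from the EXIT trap (multipath.sh@1) when the script exits 0. It unloads the host-side NVMe modules inside a retry loop, scrubs only the firewall rules tagged at setup time, and tears down the namespace. A rough sketch of that sequence; the loop's exit condition and the body of _remove_spdk_ns are xtrace-suppressed in this log, so those two details are assumptions:

# Sketch of the nvmftestfini / nvmf_tcp_fini path traced above.
sync
set +e
for i in {1..20}; do
    # The "rmmod nvme_tcp" lines above are what modprobe -v prints on success.
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    sleep 1   # assumed back-off; the real loop is at nvmf/common.sh@125
done
set -e
# Drop only the SPDK-tagged rules, leaving everything else in place:
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk   # assumed effect of _remove_spdk_ns
ip -4 addr flush cvl_0_1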
00:09:38.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:38.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.531 --rc genhtml_branch_coverage=1 00:09:38.531 --rc genhtml_function_coverage=1 00:09:38.531 --rc genhtml_legend=1 00:09:38.531 --rc geninfo_all_blocks=1 00:09:38.531 --rc geninfo_unexecuted_blocks=1 00:09:38.531 00:09:38.531 ' 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:38.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.531 --rc genhtml_branch_coverage=1 00:09:38.531 --rc genhtml_function_coverage=1 00:09:38.531 --rc genhtml_legend=1 00:09:38.531 --rc geninfo_all_blocks=1 00:09:38.531 --rc geninfo_unexecuted_blocks=1 00:09:38.531 00:09:38.531 ' 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:38.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.531 --rc genhtml_branch_coverage=1 00:09:38.531 --rc genhtml_function_coverage=1 00:09:38.531 --rc genhtml_legend=1 00:09:38.531 --rc geninfo_all_blocks=1 00:09:38.531 --rc geninfo_unexecuted_blocks=1 00:09:38.531 00:09:38.531 ' 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:38.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.531 --rc genhtml_branch_coverage=1 00:09:38.531 --rc genhtml_function_coverage=1 00:09:38.531 --rc genhtml_legend=1 00:09:38.531 --rc geninfo_all_blocks=1 00:09:38.531 --rc geninfo_unexecuted_blocks=1 00:09:38.531 00:09:38.531 ' 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:38.531 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:38.531 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:38.532 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:38.532 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:38.532 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:38.532 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.532 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:38.532 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.532 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:38.532 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:38.532 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:38.532 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:46.826 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:46.826 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:46.827 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:46.827 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:46.827 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:46.827 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:46.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:46.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.715 ms 00:09:46.827 00:09:46.827 --- 10.0.0.2 ping statistics --- 00:09:46.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.827 rtt min/avg/max/mdev = 0.715/0.715/0.715/0.000 ms 00:09:46.827 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:46.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
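Both sourcings of nvmf/common.sh above (once per test, at @33) also emit "line 33: [: : integer expression expected". That is test(1) refusing '[' '' -eq 1 ']': an unset or empty flag variable was substituted into an integer comparison. The failing test returns status 2, the surrounding if treats it as false, and the run continues, so the noise is benign; still, the usual fix is to default the flag before comparing. A hedged one-liner (the exact variable name at line 33 is not visible in this trace, so SPDK_RUN_NON_ROOT is an assumption):

# Before: if [ $SPDK_RUN_NON_ROOT -eq 1 ]   -> '[' '' -eq 1 ']' when unset
# After: default the flag so test(1) always sees an integer.
if [ "${SPDK_RUN_NON_ROOT:-0}" -eq 1 ]; then
    :   # original branch body unchanged
fi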
00:09:46.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:09:46.828 00:09:46.828 --- 10.0.0.1 ping statistics --- 00:09:46.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.828 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:09:46.828 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:46.828 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:46.828 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:46.828 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:46.828 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:46.828 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:46.828 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:46.828 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:46.828 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:46.828 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:46.828 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:46.828 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:46.828 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:46.828 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=445825 00:09:46.828 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 445825 00:09:46.828 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:46.828 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 445825 ']' 00:09:46.828 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.828 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.828 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.828 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.828 15:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:46.828 [2024-11-20 15:19:35.040019] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
00:09:46.828 [2024-11-20 15:19:35.040083] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.828 [2024-11-20 15:19:35.141278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.828 [2024-11-20 15:19:35.191254] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:46.828 [2024-11-20 15:19:35.191305] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:46.828 [2024-11-20 15:19:35.191314] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:46.828 [2024-11-20 15:19:35.191321] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:46.828 [2024-11-20 15:19:35.191327] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:46.828 [2024-11-20 15:19:35.192120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.090 15:19:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:47.090 15:19:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:47.090 15:19:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:47.090 15:19:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:47.090 15:19:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.090 15:19:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:47.090 15:19:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:47.090 15:19:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:47.090 15:19:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.090 15:19:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.090 [2024-11-20 15:19:35.921782] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:47.090 15:19:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.090 15:19:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:47.090 15:19:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.090 15:19:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.090 15:19:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.090 15:19:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:47.090 15:19:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.090 15:19:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.090 [2024-11-20 15:19:35.946098] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:47.090 15:19:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.090 15:19:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:47.090 15:19:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.090 15:19:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.090 15:19:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.090 15:19:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:47.090 15:19:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.090 15:19:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.090 malloc0 00:09:47.090 15:19:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.090 15:19:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:47.090 15:19:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.090 15:19:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.090 15:19:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.090 15:19:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:47.090 15:19:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:47.090 15:19:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:47.090 15:19:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:47.090 15:19:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:47.090 15:19:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:47.090 { 00:09:47.090 "params": { 00:09:47.090 "name": "Nvme$subsystem", 00:09:47.090 "trtype": "$TEST_TRANSPORT", 00:09:47.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:47.090 "adrfam": "ipv4", 00:09:47.090 "trsvcid": "$NVMF_PORT", 00:09:47.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:47.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:47.090 "hdgst": ${hdgst:-false}, 00:09:47.090 "ddgst": ${ddgst:-false} 00:09:47.090 }, 00:09:47.090 "method": "bdev_nvme_attach_controller" 00:09:47.090 } 00:09:47.090 EOF 00:09:47.090 )") 00:09:47.090 15:19:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:47.090 15:19:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
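bdevperf below never reads a config file from disk. After the RPC sequence (nvmf_create_transport -t tcp -o -c 0 --zcopy, nvmf_create_subsystem, nvmf_subsystem_add_listener, bdev_malloc_create, nvmf_subsystem_add_ns) stands the target up, gen_nvmf_target_json assembles one bdev_nvme_attach_controller stanza per subsystem from a heredoc, pretty-prints it with jq, and the caller hands it over through process substitution, which is why the trace shows --json /dev/fd/62. A reduced sketch of the pattern with a single controller, using SPDK's standard JSON-config shape (the helper name gen_json here is ours, not the library's):

# <(...) appears as /dev/fd/NN in the child, matching --json /dev/fd/62 above.
gen_json() {
cat <<'JSON'
{
  "subsystems": [ { "subsystem": "bdev", "config": [ {
    "method": "bdev_nvme_attach_controller",
    "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false, "ddgst": false }
  } ] } ]
}
JSON
}
./build/examples/bdevperf --json <(gen_json) -t 10 -q 128 -w verify -o 8192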
00:09:47.090 15:19:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:47.090 15:19:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:47.090 "params": { 00:09:47.090 "name": "Nvme1", 00:09:47.090 "trtype": "tcp", 00:09:47.090 "traddr": "10.0.0.2", 00:09:47.091 "adrfam": "ipv4", 00:09:47.091 "trsvcid": "4420", 00:09:47.091 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:47.091 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:47.091 "hdgst": false, 00:09:47.091 "ddgst": false 00:09:47.091 }, 00:09:47.091 "method": "bdev_nvme_attach_controller" 00:09:47.091 }' 00:09:47.091 [2024-11-20 15:19:36.048324] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:09:47.091 [2024-11-20 15:19:36.048394] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid445879 ] 00:09:47.352 [2024-11-20 15:19:36.140292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.352 [2024-11-20 15:19:36.193368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.613 Running I/O for 10 seconds... 00:09:49.500 6393.00 IOPS, 49.95 MiB/s [2024-11-20T14:19:39.403Z] 6451.50 IOPS, 50.40 MiB/s [2024-11-20T14:19:40.789Z] 6473.67 IOPS, 50.58 MiB/s [2024-11-20T14:19:41.733Z] 6483.75 IOPS, 50.65 MiB/s [2024-11-20T14:19:42.677Z] 6904.20 IOPS, 53.94 MiB/s [2024-11-20T14:19:43.620Z] 7367.50 IOPS, 57.56 MiB/s [2024-11-20T14:19:44.563Z] 7696.29 IOPS, 60.13 MiB/s [2024-11-20T14:19:45.507Z] 7944.50 IOPS, 62.07 MiB/s [2024-11-20T14:19:46.449Z] 8139.67 IOPS, 63.59 MiB/s [2024-11-20T14:19:46.449Z] 8295.20 IOPS, 64.81 MiB/s 00:09:57.489 Latency(us) 00:09:57.489 [2024-11-20T14:19:46.449Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:57.489 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:57.489 Verification LBA range: start 0x0 length 0x1000 00:09:57.489 Nvme1n1 : 10.01 8298.60 64.83 0.00 0.00 15378.02 914.77 29054.29 00:09:57.489 [2024-11-20T14:19:46.449Z] =================================================================================================================== 00:09:57.489 [2024-11-20T14:19:46.449Z] Total : 8298.60 64.83 0.00 0.00 15378.02 914.77 29054.29 00:09:57.751 15:19:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=447912 00:09:57.751 15:19:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:57.751 15:19:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:57.751 15:19:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:57.751 15:19:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:57.751 15:19:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:57.751 15:19:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:57.751 15:19:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:57.751 15:19:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:57.751 { 00:09:57.751 "params": { 00:09:57.751 "name": 
"Nvme$subsystem", 00:09:57.751 "trtype": "$TEST_TRANSPORT", 00:09:57.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:57.751 "adrfam": "ipv4", 00:09:57.751 "trsvcid": "$NVMF_PORT", 00:09:57.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:57.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:57.751 "hdgst": ${hdgst:-false}, 00:09:57.751 "ddgst": ${ddgst:-false} 00:09:57.751 }, 00:09:57.751 "method": "bdev_nvme_attach_controller" 00:09:57.751 } 00:09:57.751 EOF 00:09:57.751 )") 00:09:57.751 15:19:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:57.751 [2024-11-20 15:19:46.504922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.751 [2024-11-20 15:19:46.504951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.751 15:19:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:57.751 15:19:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:57.751 15:19:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:57.751 "params": { 00:09:57.751 "name": "Nvme1", 00:09:57.751 "trtype": "tcp", 00:09:57.751 "traddr": "10.0.0.2", 00:09:57.751 "adrfam": "ipv4", 00:09:57.751 "trsvcid": "4420", 00:09:57.751 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:57.751 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:57.751 "hdgst": false, 00:09:57.751 "ddgst": false 00:09:57.751 }, 00:09:57.751 "method": "bdev_nvme_attach_controller" 00:09:57.751 }' 00:09:57.751 [2024-11-20 15:19:46.516916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.751 [2024-11-20 15:19:46.516926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.751 [2024-11-20 15:19:46.528945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.751 [2024-11-20 15:19:46.528954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.751 [2024-11-20 15:19:46.540975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.751 [2024-11-20 15:19:46.540984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.751 [2024-11-20 15:19:46.547185] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
00:09:57.751 [2024-11-20 15:19:46.547236] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid447912 ] 00:09:57.751 [2024-11-20 15:19:46.553005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.751 [2024-11-20 15:19:46.553014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.751 [2024-11-20 15:19:46.565034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.751 [2024-11-20 15:19:46.565043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.751 [2024-11-20 15:19:46.577063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.751 [2024-11-20 15:19:46.577072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.751 [2024-11-20 15:19:46.589094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.751 [2024-11-20 15:19:46.589103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.751 [2024-11-20 15:19:46.601125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.751 [2024-11-20 15:19:46.601134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.751 [2024-11-20 15:19:46.613156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.751 [2024-11-20 15:19:46.613169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.751 [2024-11-20 15:19:46.625188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.751 [2024-11-20 15:19:46.625197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.751 [2024-11-20 15:19:46.630492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.751 [2024-11-20 15:19:46.637220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.751 [2024-11-20 15:19:46.637229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.751 [2024-11-20 15:19:46.649250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.751 [2024-11-20 15:19:46.649260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.751 [2024-11-20 15:19:46.660008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.752 [2024-11-20 15:19:46.661281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.752 [2024-11-20 15:19:46.661289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.752 [2024-11-20 15:19:46.673316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.752 [2024-11-20 15:19:46.673327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.752 [2024-11-20 15:19:46.685345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.752 [2024-11-20 15:19:46.685357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.752 [2024-11-20 15:19:46.697385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:09:57.752 [2024-11-20 15:19:46.697395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.752 [2024-11-20 15:19:46.709403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.752 [2024-11-20 15:19:46.709413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.013 [2024-11-20 15:19:46.721433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.013 [2024-11-20 15:19:46.721441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.013 [2024-11-20 15:19:46.733476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.013 [2024-11-20 15:19:46.733496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.013 [2024-11-20 15:19:46.745497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.013 [2024-11-20 15:19:46.745508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.013 [2024-11-20 15:19:46.757531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.013 [2024-11-20 15:19:46.757542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.013 [2024-11-20 15:19:46.769559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.013 [2024-11-20 15:19:46.769568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.013 [2024-11-20 15:19:46.781592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.013 [2024-11-20 15:19:46.781600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.013 [2024-11-20 15:19:46.793627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.013 [2024-11-20 15:19:46.793635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.013 [2024-11-20 15:19:46.805658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.013 [2024-11-20 15:19:46.805669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.013 [2024-11-20 15:19:46.817686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.013 [2024-11-20 15:19:46.817694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.013 [2024-11-20 15:19:46.829718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.013 [2024-11-20 15:19:46.829725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.013 [2024-11-20 15:19:46.841750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.013 [2024-11-20 15:19:46.841758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.013 [2024-11-20 15:19:46.853782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.013 [2024-11-20 15:19:46.853792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.013 [2024-11-20 15:19:46.865812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.013 [2024-11-20 15:19:46.865820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.013 [2024-11-20 
15:19:46.877842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.013 [2024-11-20 15:19:46.877850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.013 [2024-11-20 15:19:46.889873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.013 [2024-11-20 15:19:46.889881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.013 [2024-11-20 15:19:46.901906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.013 [2024-11-20 15:19:46.901916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.013 [2024-11-20 15:19:46.913936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.013 [2024-11-20 15:19:46.913944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.013 [2024-11-20 15:19:46.925968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.013 [2024-11-20 15:19:46.925976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.013 [2024-11-20 15:19:46.938000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.013 [2024-11-20 15:19:46.938009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.013 [2024-11-20 15:19:46.950039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.013 [2024-11-20 15:19:46.950055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.013 Running I/O for 5 seconds... 00:09:58.013 [2024-11-20 15:19:46.962065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.013 [2024-11-20 15:19:46.962074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.274 [2024-11-20 15:19:46.977786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.274 [2024-11-20 15:19:46.977803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.274 [2024-11-20 15:19:46.990566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.274 [2024-11-20 15:19:46.990583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.274 [2024-11-20 15:19:47.003605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.274 [2024-11-20 15:19:47.003623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.274 [2024-11-20 15:19:47.016473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.274 [2024-11-20 15:19:47.016489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.274 [2024-11-20 15:19:47.029415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.274 [2024-11-20 15:19:47.029430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.275 [2024-11-20 15:19:47.042132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.275 [2024-11-20 15:19:47.042147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.275 [2024-11-20 15:19:47.054831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:58.275 [2024-11-20 15:19:47.054847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.275 [2024-11-20 15:19:47.067896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.275 [2024-11-20 15:19:47.067913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.275 [2024-11-20 15:19:47.081237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.275 [2024-11-20 15:19:47.081253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.275 [2024-11-20 15:19:47.094533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.275 [2024-11-20 15:19:47.094548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.275 [2024-11-20 15:19:47.107378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.275 [2024-11-20 15:19:47.107393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.275 [2024-11-20 15:19:47.120648] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.275 [2024-11-20 15:19:47.120663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.275 [2024-11-20 15:19:47.134141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.275 [2024-11-20 15:19:47.134156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.275 [2024-11-20 15:19:47.147635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.275 [2024-11-20 15:19:47.147654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.275 [2024-11-20 15:19:47.160870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.275 [2024-11-20 15:19:47.160885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.275 [2024-11-20 15:19:47.173704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.275 [2024-11-20 15:19:47.173718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.275 [2024-11-20 15:19:47.187261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.275 [2024-11-20 15:19:47.187276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.275 [2024-11-20 15:19:47.200632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.275 [2024-11-20 15:19:47.200647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.275 [2024-11-20 15:19:47.213666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.275 [2024-11-20 15:19:47.213682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.275 [2024-11-20 15:19:47.226400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.275 [2024-11-20 15:19:47.226415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.536 [2024-11-20 15:19:47.239856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.536 [2024-11-20 15:19:47.239871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.536 [2024-11-20 15:19:47.253031] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.536 [2024-11-20 15:19:47.253046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.536 [2024-11-20 15:19:47.266000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.536 [2024-11-20 15:19:47.266015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.536 [2024-11-20 15:19:47.279550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.536 [2024-11-20 15:19:47.279566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.536 [2024-11-20 15:19:47.292434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.537 [2024-11-20 15:19:47.292449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.537 [2024-11-20 15:19:47.306066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.537 [2024-11-20 15:19:47.306081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.537 [2024-11-20 15:19:47.319313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.537 [2024-11-20 15:19:47.319328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.537 [2024-11-20 15:19:47.332764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.537 [2024-11-20 15:19:47.332779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.537 [2024-11-20 15:19:47.345684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.537 [2024-11-20 15:19:47.345699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.537 [2024-11-20 15:19:47.358267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.537 [2024-11-20 15:19:47.358281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.537 [2024-11-20 15:19:47.371498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.537 [2024-11-20 15:19:47.371513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.537 [2024-11-20 15:19:47.384573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.537 [2024-11-20 15:19:47.384588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.537 [2024-11-20 15:19:47.397068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.537 [2024-11-20 15:19:47.397087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.537 [2024-11-20 15:19:47.410240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.537 [2024-11-20 15:19:47.410254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.537 [2024-11-20 15:19:47.423812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.537 [2024-11-20 15:19:47.423826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.537 [2024-11-20 15:19:47.436785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.537 [2024-11-20 15:19:47.436800] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.537 [2024-11-20 15:19:47.450472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.537 [2024-11-20 15:19:47.450487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.537 [2024-11-20 15:19:47.463016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.537 [2024-11-20 15:19:47.463031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.537 [2024-11-20 15:19:47.475747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.537 [2024-11-20 15:19:47.475761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.537 [2024-11-20 15:19:47.488401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.537 [2024-11-20 15:19:47.488415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.798 [2024-11-20 15:19:47.501668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.798 [2024-11-20 15:19:47.501683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.798 [2024-11-20 15:19:47.515378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.798 [2024-11-20 15:19:47.515393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.798 [2024-11-20 15:19:47.527812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.798 [2024-11-20 15:19:47.527827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.798 [2024-11-20 15:19:47.540640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.798 [2024-11-20 15:19:47.540655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.798 [2024-11-20 15:19:47.553405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.798 [2024-11-20 15:19:47.553420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.798 [2024-11-20 15:19:47.566990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.798 [2024-11-20 15:19:47.567005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.798 [2024-11-20 15:19:47.580283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.798 [2024-11-20 15:19:47.580299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.798 [2024-11-20 15:19:47.594284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.799 [2024-11-20 15:19:47.594299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.799 [2024-11-20 15:19:47.607436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.799 [2024-11-20 15:19:47.607451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.799 [2024-11-20 15:19:47.620562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.799 [2024-11-20 15:19:47.620577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.799 [2024-11-20 15:19:47.633182] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.799 [2024-11-20 15:19:47.633196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.799 [2024-11-20 15:19:47.645922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.799 [2024-11-20 15:19:47.645943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.799 [2024-11-20 15:19:47.658576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.799 [2024-11-20 15:19:47.658591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.799 [2024-11-20 15:19:47.671710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.799 [2024-11-20 15:19:47.671726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.799 [2024-11-20 15:19:47.684476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.799 [2024-11-20 15:19:47.684491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.799 [2024-11-20 15:19:47.697021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.799 [2024-11-20 15:19:47.697037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.799 [2024-11-20 15:19:47.710214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.799 [2024-11-20 15:19:47.710229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.799 [2024-11-20 15:19:47.723134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.799 [2024-11-20 15:19:47.723149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.799 [2024-11-20 15:19:47.736726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.799 [2024-11-20 15:19:47.736742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.799 [2024-11-20 15:19:47.750288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.799 [2024-11-20 15:19:47.750305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.060 [2024-11-20 15:19:47.763356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.060 [2024-11-20 15:19:47.763372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.060 [2024-11-20 15:19:47.776405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.060 [2024-11-20 15:19:47.776420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.060 [2024-11-20 15:19:47.790067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.060 [2024-11-20 15:19:47.790082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.060 [2024-11-20 15:19:47.802642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.060 [2024-11-20 15:19:47.802657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.060 [2024-11-20 15:19:47.815235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.060 [2024-11-20 15:19:47.815251] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.060 [2024-11-20 15:19:47.828279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.060 [2024-11-20 15:19:47.828294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.060 [2024-11-20 15:19:47.841076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.060 [2024-11-20 15:19:47.841091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.060 [2024-11-20 15:19:47.853841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.060 [2024-11-20 15:19:47.853856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.060 [2024-11-20 15:19:47.866724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.060 [2024-11-20 15:19:47.866739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.060 [2024-11-20 15:19:47.880425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.060 [2024-11-20 15:19:47.880439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.060 [2024-11-20 15:19:47.893867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.060 [2024-11-20 15:19:47.893886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.060 [2024-11-20 15:19:47.906855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.060 [2024-11-20 15:19:47.906870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.060 [2024-11-20 15:19:47.919783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.060 [2024-11-20 15:19:47.919798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.060 [2024-11-20 15:19:47.933531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.060 [2024-11-20 15:19:47.933546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.060 [2024-11-20 15:19:47.946074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.060 [2024-11-20 15:19:47.946089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.060 [2024-11-20 15:19:47.959541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.060 [2024-11-20 15:19:47.959556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.060 19051.00 IOPS, 148.84 MiB/s [2024-11-20T14:19:48.020Z] [2024-11-20 15:19:47.972802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.060 [2024-11-20 15:19:47.972817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.060 [2024-11-20 15:19:47.985219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.060 [2024-11-20 15:19:47.985234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.060 [2024-11-20 15:19:47.998444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.060 [2024-11-20 15:19:47.998459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.060 [2024-11-20 
15:19:48.011788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.060 [2024-11-20 15:19:48.011803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.322 [2024-11-20 15:19:48.025585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.322 [2024-11-20 15:19:48.025600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.322 [2024-11-20 15:19:48.038495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.322 [2024-11-20 15:19:48.038511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.322 [2024-11-20 15:19:48.050969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.322 [2024-11-20 15:19:48.050984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.322 [2024-11-20 15:19:48.064356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.322 [2024-11-20 15:19:48.064371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.322 [2024-11-20 15:19:48.076979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.322 [2024-11-20 15:19:48.076994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.322 [2024-11-20 15:19:48.090012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.322 [2024-11-20 15:19:48.090029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.322 [2024-11-20 15:19:48.103321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.322 [2024-11-20 15:19:48.103336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.322 [2024-11-20 15:19:48.116600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.322 [2024-11-20 15:19:48.116616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.322 [2024-11-20 15:19:48.129719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.322 [2024-11-20 15:19:48.129735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.322 [2024-11-20 15:19:48.143447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.322 [2024-11-20 15:19:48.143463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.322 [2024-11-20 15:19:48.156061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.322 [2024-11-20 15:19:48.156077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.322 [2024-11-20 15:19:48.168876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.322 [2024-11-20 15:19:48.168891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.322 [2024-11-20 15:19:48.182104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.322 [2024-11-20 15:19:48.182120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.322 [2024-11-20 15:19:48.194830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.322 [2024-11-20 15:19:48.194846] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.322 [2024-11-20 15:19:48.207976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.322 [2024-11-20 15:19:48.207992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.322 [2024-11-20 15:19:48.220599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.322 [2024-11-20 15:19:48.220615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.322 [2024-11-20 15:19:48.233929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.322 [2024-11-20 15:19:48.233944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.322 [2024-11-20 15:19:48.247599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.322 [2024-11-20 15:19:48.247615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.322 [2024-11-20 15:19:48.260893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.322 [2024-11-20 15:19:48.260908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.322 [2024-11-20 15:19:48.274330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.322 [2024-11-20 15:19:48.274345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.584 [2024-11-20 15:19:48.287617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.584 [2024-11-20 15:19:48.287633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.584 [2024-11-20 15:19:48.300802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.584 [2024-11-20 15:19:48.300817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.584 [2024-11-20 15:19:48.314189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.584 [2024-11-20 15:19:48.314204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.584 [2024-11-20 15:19:48.327796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.584 [2024-11-20 15:19:48.327811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.584 [2024-11-20 15:19:48.341145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.584 [2024-11-20 15:19:48.341165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.584 [2024-11-20 15:19:48.353734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.584 [2024-11-20 15:19:48.353750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.584 [2024-11-20 15:19:48.365827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.584 [2024-11-20 15:19:48.365842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.584 [2024-11-20 15:19:48.379129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.584 [2024-11-20 15:19:48.379145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.584 [2024-11-20 15:19:48.392032] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.584 [2024-11-20 15:19:48.392046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.584 [2024-11-20 15:19:48.405339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.584 [2024-11-20 15:19:48.405354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.584 [2024-11-20 15:19:48.417862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.584 [2024-11-20 15:19:48.417877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.584 [2024-11-20 15:19:48.431257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.584 [2024-11-20 15:19:48.431272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.584 [2024-11-20 15:19:48.444254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.584 [2024-11-20 15:19:48.444270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.584 [2024-11-20 15:19:48.457380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.584 [2024-11-20 15:19:48.457395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.584 [2024-11-20 15:19:48.470660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.584 [2024-11-20 15:19:48.470675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.584 [2024-11-20 15:19:48.483558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.584 [2024-11-20 15:19:48.483573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.584 [2024-11-20 15:19:48.496993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.584 [2024-11-20 15:19:48.497009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.584 [2024-11-20 15:19:48.509868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.584 [2024-11-20 15:19:48.509883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.584 [2024-11-20 15:19:48.522458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.584 [2024-11-20 15:19:48.522473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.584 [2024-11-20 15:19:48.536035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.584 [2024-11-20 15:19:48.536050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.845 [2024-11-20 15:19:48.549466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.845 [2024-11-20 15:19:48.549482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.845 [2024-11-20 15:19:48.562128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.845 [2024-11-20 15:19:48.562144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.845 [2024-11-20 15:19:48.575602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.845 [2024-11-20 15:19:48.575617] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.845 [2024-11-20 15:19:48.588457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.845 [2024-11-20 15:19:48.588472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.845 [2024-11-20 15:19:48.601189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.845 [2024-11-20 15:19:48.601204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.845 [2024-11-20 15:19:48.613454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.845 [2024-11-20 15:19:48.613470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.845 [2024-11-20 15:19:48.627150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.845 [2024-11-20 15:19:48.627169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.845 [2024-11-20 15:19:48.640176] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.845 [2024-11-20 15:19:48.640191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.845 [2024-11-20 15:19:48.653100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.845 [2024-11-20 15:19:48.653115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.845 [2024-11-20 15:19:48.666005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.845 [2024-11-20 15:19:48.666021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.845 [2024-11-20 15:19:48.678833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.845 [2024-11-20 15:19:48.678848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.845 [2024-11-20 15:19:48.692652] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.845 [2024-11-20 15:19:48.692668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.845 [2024-11-20 15:19:48.705562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.845 [2024-11-20 15:19:48.705577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.845 [2024-11-20 15:19:48.719041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.845 [2024-11-20 15:19:48.719057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.845 [2024-11-20 15:19:48.731665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.845 [2024-11-20 15:19:48.731681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.845 [2024-11-20 15:19:48.745565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.845 [2024-11-20 15:19:48.745580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.845 [2024-11-20 15:19:48.759130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.845 [2024-11-20 15:19:48.759145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.845 [2024-11-20 15:19:48.772469] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.845 [2024-11-20 15:19:48.772485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.845 [2024-11-20 15:19:48.785768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.845 [2024-11-20 15:19:48.785783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.845 [2024-11-20 15:19:48.799225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.845 [2024-11-20 15:19:48.799241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.107 [2024-11-20 15:19:48.812340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.107 [2024-11-20 15:19:48.812356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.107 [2024-11-20 15:19:48.825622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.107 [2024-11-20 15:19:48.825638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.107 [2024-11-20 15:19:48.838901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.107 [2024-11-20 15:19:48.838916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.107 [2024-11-20 15:19:48.852213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.107 [2024-11-20 15:19:48.852228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.107 [2024-11-20 15:19:48.865339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.107 [2024-11-20 15:19:48.865354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.107 [2024-11-20 15:19:48.878644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.107 [2024-11-20 15:19:48.878663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.107 [2024-11-20 15:19:48.891619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.107 [2024-11-20 15:19:48.891634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.107 [2024-11-20 15:19:48.905078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.107 [2024-11-20 15:19:48.905094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.107 [2024-11-20 15:19:48.917529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.107 [2024-11-20 15:19:48.917544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.107 [2024-11-20 15:19:48.930296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.107 [2024-11-20 15:19:48.930311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.107 [2024-11-20 15:19:48.943595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.107 [2024-11-20 15:19:48.943610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.107 [2024-11-20 15:19:48.957092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.107 [2024-11-20 15:19:48.957107] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.107 19106.50 IOPS, 149.27 MiB/s [2024-11-20T14:19:49.067Z] [2024-11-20 15:19:48.969824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.107 [2024-11-20 15:19:48.969839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.107 [2024-11-20 15:19:48.982042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.107 [2024-11-20 15:19:48.982057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.107 [2024-11-20 15:19:48.995100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.107 [2024-11-20 15:19:48.995115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.107 [2024-11-20 15:19:49.008360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.107 [2024-11-20 15:19:49.008375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.107 [2024-11-20 15:19:49.021956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.107 [2024-11-20 15:19:49.021971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.107 [2024-11-20 15:19:49.035057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.107 [2024-11-20 15:19:49.035072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.107 [2024-11-20 15:19:49.047499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.107 [2024-11-20 15:19:49.047513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.107 [2024-11-20 15:19:49.060812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.107 [2024-11-20 15:19:49.060827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.367 [2024-11-20 15:19:49.074320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.367 [2024-11-20 15:19:49.074335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.367 [2024-11-20 15:19:49.087120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.367 [2024-11-20 15:19:49.087135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.367 [2024-11-20 15:19:49.099448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.367 [2024-11-20 15:19:49.099462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.367 [2024-11-20 15:19:49.112586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.367 [2024-11-20 15:19:49.112601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.367 [2024-11-20 15:19:49.125859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.367 [2024-11-20 15:19:49.125879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.368 [2024-11-20 15:19:49.139468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.368 [2024-11-20 15:19:49.139484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.368 [2024-11-20 
15:19:49.151968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.368 [2024-11-20 15:19:49.151983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.368 [2024-11-20 15:19:49.165347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.368 [2024-11-20 15:19:49.165362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.368 [2024-11-20 15:19:49.178361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.368 [2024-11-20 15:19:49.178375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.368 [2024-11-20 15:19:49.191868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.368 [2024-11-20 15:19:49.191883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.368 [2024-11-20 15:19:49.204628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.368 [2024-11-20 15:19:49.204643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.368 [2024-11-20 15:19:49.217625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.368 [2024-11-20 15:19:49.217640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.368 [2024-11-20 15:19:49.230766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.368 [2024-11-20 15:19:49.230781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.368 [2024-11-20 15:19:49.243628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.368 [2024-11-20 15:19:49.243643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.368 [2024-11-20 15:19:49.256610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.368 [2024-11-20 15:19:49.256625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.368 [2024-11-20 15:19:49.269599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.368 [2024-11-20 15:19:49.269614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.368 [2024-11-20 15:19:49.283200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.368 [2024-11-20 15:19:49.283215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.368 [2024-11-20 15:19:49.296988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.368 [2024-11-20 15:19:49.297003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.368 [2024-11-20 15:19:49.309313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.368 [2024-11-20 15:19:49.309328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.368 [2024-11-20 15:19:49.322098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.368 [2024-11-20 15:19:49.322113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.629 [2024-11-20 15:19:49.334980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.629 [2024-11-20 15:19:49.334994] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:00.629 [2024-11-20 15:19:49.348326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:00.629 [2024-11-20 15:19:49.348340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[elided: the same two-line error pair repeats at roughly 13 ms intervals, about 200 more times, from 15:19:49.361 through 15:19:52.076 (runtime 00:10:00.629 to 00:10:03.246); only the timestamps differ. Interleaved I/O progress samples from the running zcopy workload:]
19132.33 IOPS, 149.47 MiB/s [2024-11-20T14:19:50.113Z]
19120.75 IOPS, 149.38 MiB/s [2024-11-20T14:19:51.159Z]
19135.60 IOPS, 149.50 MiB/s [2024-11-20T14:19:52.206Z]
00:10:03.246 Latency(us)
[2024-11-20T14:19:52.206Z] Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
[2024-11-20T14:19:52.206Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
[2024-11-20T14:19:52.206Z] Nvme1n1            :       5.01   19147.10     149.59      0.00     0.00    6678.63    2935.47   14199.47
[2024-11-20T14:19:52.206Z] ===================================================================================================================
[2024-11-20T14:19:52.206Z] Total              :               19147.10     149.59      0.00     0.00    6678.63    2935.47   14199.47
00:10:03.246 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (447912) - No such process
00:10:03.246 15:19:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 447912
00:10:03.246 15:19:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:03.246 15:19:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:03.246 15:19:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:03.246 15:19:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:03.246 15:19:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:10:03.246 15:19:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:03.246 15:19:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:03.246 delay0
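The error storm condensed above is zcopy.sh exercising its negative path: while the randrw workload runs, the script repeatedly asks the target to attach another namespace under NSID 1, which is already taken, so every attempt is rejected by subsystem.c and reported by nvmf_rpc.c. The teardown then swaps the malloc namespace for a delay bdev so the abort example that follows has long-lived in-flight commands to cancel (rpc_cmd in the trace is the harness wrapper around scripts/rpc.py). A minimal sketch of the same sequence driven by hand, assuming a running SPDK target that already has nqn.2016-06.io.spdk:cnode1 with malloc0 attached as NSID 1; flag spellings follow the stock rpc.py options and should be read as an approximation:

# Negative path: NSID 1 is already attached, so this fails with
# "Requested NSID 1 already in use" / "Unable to add namespace".
scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0

# Abort setup: detach the namespace, wrap the bdev in a delay bdev adding
# 1000000 us (~1 s) to every read/write, and re-attach it as NSID 1 so that
# queued commands stay pending long enough for abort requests to land.
scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 delay0

# The abort example then floods the slow namespace and cancels its own
# submissions (invocation copied from the run below):
build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The queue depth of 64 against a roughly one-second namespace is what makes the abort counters below plausible: almost every submitted command is still pending when its abort arrives.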
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.246 15:19:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:03.246 15:19:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.246 15:19:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:03.246 15:19:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.246 15:19:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:03.507 [2024-11-20 15:19:52.244576] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:11.644 Initializing NVMe Controllers 00:10:11.644 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:11.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:11.644 Initialization complete. Launching workers. 00:10:11.644 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 242, failed: 33620 00:10:11.644 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 33747, failed to submit 115 00:10:11.644 success 33650, unsuccessful 97, failed 0 00:10:11.644 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:11.644 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:11.644 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:11.644 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:11.644 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:11.644 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:11.644 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:11.644 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:11.644 rmmod nvme_tcp 00:10:11.644 rmmod nvme_fabrics 00:10:11.644 rmmod nvme_keyring 00:10:11.644 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:11.644 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:11.644 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:11.644 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 445825 ']' 00:10:11.644 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 445825 00:10:11.644 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 445825 ']' 00:10:11.644 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 445825 00:10:11.644 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:11.644 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:11.644 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 445825 00:10:11.644 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:11.644 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:11.644 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 445825' 00:10:11.644 killing process with pid 445825 00:10:11.644 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 445825 00:10:11.644 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 445825 00:10:11.644 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:11.644 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:11.644 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:11.644 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:11.644 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:11.644 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:11.644 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:11.644 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:11.644 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:11.644 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.644 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.644 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.029 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:13.029 00:10:13.029 real 0m34.454s 00:10:13.029 user 0m44.995s 00:10:13.029 sys 0m12.155s 00:10:13.029 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.029 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.029 ************************************ 00:10:13.029 END TEST nvmf_zcopy 00:10:13.029 ************************************ 00:10:13.029 15:20:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:13.029 15:20:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:13.029 15:20:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.029 15:20:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:13.029 ************************************ 00:10:13.029 START TEST nvmf_nmic 00:10:13.029 ************************************ 00:10:13.029 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:13.029 * Looking for test storage... 
00:10:13.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:13.029 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:13.029 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:10:13.029 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:13.029 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:13.029 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:13.029 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:13.029 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:13.029 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:13.029 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:13.029 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:13.029 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:13.029 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:13.029 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:13.029 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:13.029 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:13.029 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:13.029 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:13.029 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:13.029 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) ))
00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1
00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1
00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1
00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1
00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2
00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2
00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2
00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2
00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0
00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706-@1707 -- # export LCOV_OPTS and LCOV=lcov with the rc options below (xtrace echoes the same option block four times, once per export and once per assignment; the duplicates are elided):
00:10:13.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:13.030 --rc genhtml_branch_coverage=1
00:10:13.030 --rc genhtml_function_coverage=1
00:10:13.030 --rc genhtml_legend=1
00:10:13.030 --rc geninfo_all_blocks=1
00:10:13.030 --rc geninfo_unexecuted_blocks=1
00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
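The cmp_versions trace above reduces to a per-field numeric comparison of the installed lcov version (1.15) against 2: split both strings on ".-:", compare field by field, and decide the "<" relation on the first differing field. A compressed sketch of that logic, reconstructed from the xtrace rather than copied from scripts/common.sh, so treat names and details as an approximation:

# Split versions on . - : and compare numerically, field by field;
# missing fields are treated as 0.
cmp_versions() {
    local IFS=.-: op=$2 v ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' ]]; return; }
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # every field equal
}
lt() { cmp_versions "$1" '<' "$2"; }
lt 1.15 2 && echo "lcov 1.15 predates 2"   # matches the trace: 1 < 2 on the first field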
00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:13.030 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:13.030 
15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:13.030 15:20:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:21.175 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:21.175 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:21.175 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:21.175 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:21.175 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:21.175 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:21.175 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:21.175 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:21.175 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:21.175 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:21.175 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:21.175 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:21.175 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:21.175 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:21.175 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:21.176 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:21.176 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:21.176 15:20:08 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:21.176 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:21.176 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:21.176 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:21.176 15:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:21.176 15:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:21.176 15:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:21.176 15:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:21.176 15:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:21.176 15:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:21.176 15:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:21.176 15:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:21.176 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:21.176 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.562 ms 00:10:21.176 00:10:21.176 --- 10.0.0.2 ping statistics --- 00:10:21.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.176 rtt min/avg/max/mdev = 0.562/0.562/0.562/0.000 ms 00:10:21.176 15:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:21.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:21.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:10:21.176 00:10:21.176 --- 10.0.0.1 ping statistics --- 00:10:21.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.176 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:10:21.176 15:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:21.176 15:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:21.176 15:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:21.176 15:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:21.176 15:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:21.176 15:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:21.176 15:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:21.176 15:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:21.176 15:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:21.176 15:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:21.176 15:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:21.176 15:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:21.176 15:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:21.176 15:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=454834 00:10:21.176 15:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 454834 00:10:21.176 15:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:21.176 15:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 454834 ']' 00:10:21.176 15:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.176 15:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:21.177 15:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.177 15:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:21.177 15:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:21.177 [2024-11-20 15:20:09.380884] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
00:10:21.177 [2024-11-20 15:20:09.380948] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.177 [2024-11-20 15:20:09.482886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:21.177 [2024-11-20 15:20:09.536824] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:21.177 [2024-11-20 15:20:09.536879] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:21.177 [2024-11-20 15:20:09.536887] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:21.177 [2024-11-20 15:20:09.536894] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:21.177 [2024-11-20 15:20:09.536901] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:21.177 [2024-11-20 15:20:09.539295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:21.177 [2024-11-20 15:20:09.539638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:21.177 [2024-11-20 15:20:09.539799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:21.177 [2024-11-20 15:20:09.539802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.438 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:21.438 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:21.438 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:21.438 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:21.438 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:21.438 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:21.438 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:21.438 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.438 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:21.438 [2024-11-20 15:20:10.236541] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:21.438 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.438 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:21.439 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.439 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:21.439 Malloc0 00:10:21.439 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.439 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:21.439 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.439 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:10:21.439 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.439 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:21.439 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.439 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:21.439 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.439 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:21.439 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.439 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:21.439 [2024-11-20 15:20:10.309547] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:21.439 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.439 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:21.439 test case1: single bdev can't be used in multiple subsystems 00:10:21.439 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:21.439 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.439 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:21.439 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.439 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:21.439 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.439 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:21.439 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.439 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:21.439 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:21.439 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.439 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:21.439 [2024-11-20 15:20:10.345438] bdev.c:8278:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:21.439 [2024-11-20 15:20:10.345459] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:21.439 [2024-11-20 15:20:10.345467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.439 request: 00:10:21.439 { 00:10:21.439 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:21.439 "namespace": { 00:10:21.439 "bdev_name": "Malloc0", 00:10:21.439 "no_auto_visible": false 
00:10:21.439 }, 00:10:21.439 "method": "nvmf_subsystem_add_ns", 00:10:21.439 "req_id": 1 00:10:21.439 } 00:10:21.439 Got JSON-RPC error response 00:10:21.439 response: 00:10:21.439 { 00:10:21.439 "code": -32602, 00:10:21.439 "message": "Invalid parameters" 00:10:21.439 } 00:10:21.439 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:21.439 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:21.439 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:21.439 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:21.439 Adding namespace failed - expected result. 00:10:21.439 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:21.439 test case2: host connect to nvmf target in multiple paths 00:10:21.439 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:21.439 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.439 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:21.439 [2024-11-20 15:20:10.357606] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:21.439 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.439 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:23.356 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:24.742 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:24.742 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:24.742 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:24.742 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:24.742 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:26.654 15:20:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:26.654 15:20:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:26.654 15:20:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:26.654 15:20:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:26.654 15:20:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:26.654 15:20:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:26.654 15:20:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:26.654 [global] 00:10:26.654 thread=1 00:10:26.654 invalidate=1 00:10:26.654 rw=write 00:10:26.654 time_based=1 00:10:26.654 runtime=1 00:10:26.654 ioengine=libaio 00:10:26.654 direct=1 00:10:26.654 bs=4096 00:10:26.654 iodepth=1 00:10:26.654 norandommap=0 00:10:26.654 numjobs=1 00:10:26.654 00:10:26.654 verify_dump=1 00:10:26.654 verify_backlog=512 00:10:26.654 verify_state_save=0 00:10:26.654 do_verify=1 00:10:26.654 verify=crc32c-intel 00:10:26.654 [job0] 00:10:26.654 filename=/dev/nvme0n1 00:10:26.654 Could not set queue depth (nvme0n1) 00:10:26.915 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.915 fio-3.35 00:10:26.915 Starting 1 thread 00:10:28.302 00:10:28.302 job0: (groupid=0, jobs=1): err= 0: pid=456173: Wed Nov 20 15:20:17 2024 00:10:28.302 read: IOPS=17, BW=69.9KiB/s (71.6kB/s)(72.0KiB/1030msec) 00:10:28.302 slat (nsec): min=7633, max=26878, avg=25523.28, stdev=4468.76 00:10:28.302 clat (usec): min=901, max=42022, avg=39449.00, stdev=9628.80 00:10:28.302 lat (usec): min=928, max=42049, avg=39474.52, stdev=9628.47 00:10:28.302 clat percentiles (usec): 00:10:28.302 | 1.00th=[ 906], 5.00th=[ 906], 10.00th=[41157], 20.00th=[41157], 00:10:28.302 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:10:28.302 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:28.302 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:28.302 | 99.99th=[42206] 00:10:28.302 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:10:28.302 slat (nsec): min=9001, max=68175, avg=30325.95, stdev=10200.35 00:10:28.302 clat (usec): min=193, max=816, avg=586.17, stdev=94.29 00:10:28.302 lat (usec): min=204, max=858, avg=616.50, stdev=98.91 00:10:28.302 clat percentiles (usec): 00:10:28.302 | 1.00th=[ 363], 5.00th=[ 429], 10.00th=[ 461], 20.00th=[ 506], 00:10:28.302 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 586], 60.00th=[ 611], 00:10:28.302 | 70.00th=[ 635], 80.00th=[ 676], 90.00th=[ 701], 95.00th=[ 734], 00:10:28.302 | 99.00th=[ 783], 99.50th=[ 791], 99.90th=[ 816], 99.95th=[ 816], 00:10:28.302 | 99.99th=[ 816] 00:10:28.302 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:28.302 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:28.302 lat (usec) : 250=0.38%, 500=16.60%, 750=77.17%, 1000=2.64% 00:10:28.302 lat (msec) : 50=3.21% 00:10:28.302 cpu : usr=1.17%, sys=1.75%, ctx=530, majf=0, minf=1 00:10:28.302 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:28.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.302 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.302 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:28.302 00:10:28.302 Run status group 0 (all jobs): 00:10:28.302 READ: bw=69.9KiB/s (71.6kB/s), 69.9KiB/s-69.9KiB/s (71.6kB/s-71.6kB/s), io=72.0KiB (73.7kB), run=1030-1030msec 00:10:28.302 WRITE: bw=1988KiB/s (2036kB/s), 1988KiB/s-1988KiB/s (2036kB/s-2036kB/s), io=2048KiB (2097kB), run=1030-1030msec 00:10:28.302 00:10:28.302 Disk stats (read/write): 00:10:28.302 nvme0n1: ios=64/512, merge=0/0, ticks=595/230, in_queue=825, util=93.69% 00:10:28.302 15:20:17 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:28.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:28.302 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:28.302 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:28.302 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:28.302 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:28.302 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:28.302 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:28.302 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:28.302 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:28.302 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:28.302 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:28.302 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:28.302 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:28.302 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:28.302 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:28.302 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:28.302 rmmod nvme_tcp 00:10:28.302 rmmod nvme_fabrics 00:10:28.302 rmmod nvme_keyring 00:10:28.563 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:28.563 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:28.563 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:28.563 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 454834 ']' 00:10:28.563 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 454834 00:10:28.563 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 454834 ']' 00:10:28.563 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 454834 00:10:28.563 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:28.563 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:28.563 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 454834 00:10:28.563 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:28.563 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:28.563 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 454834' 00:10:28.563 killing process with pid 454834 00:10:28.563 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 454834 00:10:28.563 15:20:17 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 454834 00:10:28.563 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:28.563 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:28.563 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:28.563 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:28.563 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:28.563 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:28.563 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:28.563 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:28.563 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:28.563 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.563 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:28.563 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:31.112 00:10:31.112 real 0m17.856s 00:10:31.112 user 0m51.657s 00:10:31.112 sys 0m6.507s 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.112 ************************************ 00:10:31.112 END TEST nvmf_nmic 00:10:31.112 ************************************ 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:31.112 ************************************ 00:10:31.112 START TEST nvmf_fio_target 00:10:31.112 ************************************ 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:31.112 * Looking for test storage... 
00:10:31.112 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:31.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.112 --rc genhtml_branch_coverage=1 00:10:31.112 --rc genhtml_function_coverage=1 00:10:31.112 --rc genhtml_legend=1 00:10:31.112 --rc geninfo_all_blocks=1 00:10:31.112 --rc geninfo_unexecuted_blocks=1 00:10:31.112 00:10:31.112 ' 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:31.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.112 --rc genhtml_branch_coverage=1 00:10:31.112 --rc genhtml_function_coverage=1 00:10:31.112 --rc genhtml_legend=1 00:10:31.112 --rc geninfo_all_blocks=1 00:10:31.112 --rc geninfo_unexecuted_blocks=1 00:10:31.112 00:10:31.112 ' 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:31.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.112 --rc genhtml_branch_coverage=1 00:10:31.112 --rc genhtml_function_coverage=1 00:10:31.112 --rc genhtml_legend=1 00:10:31.112 --rc geninfo_all_blocks=1 00:10:31.112 --rc geninfo_unexecuted_blocks=1 00:10:31.112 00:10:31.112 ' 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:31.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.112 --rc genhtml_branch_coverage=1 00:10:31.112 --rc genhtml_function_coverage=1 00:10:31.112 --rc genhtml_legend=1 00:10:31.112 --rc geninfo_all_blocks=1 00:10:31.112 --rc geninfo_unexecuted_blocks=1 00:10:31.112 00:10:31.112 ' 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:31.112 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:31.113 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.113 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.113 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.113 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:31.113 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.113 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:31.113 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:31.113 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:31.113 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:31.113 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:31.113 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:31.113 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:31.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:31.113 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:31.113 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:31.113 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:31.113 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:31.113 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:31.113 15:20:19 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:31.113 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:31.113 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:31.113 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:31.113 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:31.113 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:31.113 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:31.113 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.113 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:31.113 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.113 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:31.113 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:31.113 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:31.113 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.258 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:39.258 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:39.258 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:39.259 15:20:27 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:39.259 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:39.259 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:39.259 15:20:27 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:39.259 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:39.259 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:39.259 15:20:27 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:39.259 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:39.259 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:10:39.259 00:10:39.259 --- 10.0.0.2 ping statistics --- 00:10:39.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.259 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:39.259 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:39.259 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:10:39.259 00:10:39.259 --- 10.0.0.1 ping statistics --- 00:10:39.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.259 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:39.259 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:39.260 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:39.260 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:39.260 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:39.260 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:39.260 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:39.260 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:39.260 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:39.260 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:39.260 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:39.260 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:39.260 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.260 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=460796 00:10:39.260 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 460796 00:10:39.260 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:39.260 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 460796 ']' 00:10:39.260 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.260 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:39.260 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.260 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:39.260 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.260 [2024-11-20 15:20:27.466121] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
00:10:39.260 [2024-11-20 15:20:27.466188] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.260 [2024-11-20 15:20:27.565633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:39.260 [2024-11-20 15:20:27.617979] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:39.260 [2024-11-20 15:20:27.618031] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:39.260 [2024-11-20 15:20:27.618040] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:39.260 [2024-11-20 15:20:27.618048] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:39.260 [2024-11-20 15:20:27.618054] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:39.260 [2024-11-20 15:20:27.620482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.260 [2024-11-20 15:20:27.620642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.260 [2024-11-20 15:20:27.620802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.260 [2024-11-20 15:20:27.620802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:39.522 15:20:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.522 15:20:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:39.522 15:20:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:39.522 15:20:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:39.522 15:20:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.522 15:20:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:39.522 15:20:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:39.783 [2024-11-20 15:20:28.501242] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:39.783 15:20:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:40.043 15:20:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:40.043 15:20:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:40.043 15:20:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:40.043 15:20:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:40.304 15:20:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:40.304 15:20:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:40.564 15:20:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:40.564 15:20:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:40.825 15:20:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:41.086 15:20:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:41.087 15:20:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:41.087 15:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:41.087 15:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:41.348 15:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:41.348 15:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:41.609 15:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:41.609 15:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:41.609 15:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:41.869 15:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:41.869 15:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:42.129 15:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:42.129 [2024-11-20 15:20:31.043427] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:42.129 15:20:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:42.388 15:20:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:42.647 15:20:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:44.614 15:20:33 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:44.614 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:44.614 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:44.614 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:44.614 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:44.614 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:46.082 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:46.082 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:46.082 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:46.344 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:46.344 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:46.344 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:46.344 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:46.344 [global] 00:10:46.344 thread=1 00:10:46.344 invalidate=1 00:10:46.344 rw=write 00:10:46.344 time_based=1 00:10:46.344 runtime=1 00:10:46.344 ioengine=libaio 00:10:46.344 direct=1 00:10:46.344 bs=4096 00:10:46.344 iodepth=1 00:10:46.344 norandommap=0 00:10:46.344 numjobs=1 00:10:46.344 00:10:46.344 verify_dump=1 00:10:46.344 verify_backlog=512 00:10:46.344 verify_state_save=0 00:10:46.344 do_verify=1 00:10:46.344 verify=crc32c-intel 00:10:46.344 [job0] 00:10:46.344 filename=/dev/nvme0n1 00:10:46.344 [job1] 00:10:46.344 filename=/dev/nvme0n2 00:10:46.344 [job2] 00:10:46.344 filename=/dev/nvme0n3 00:10:46.344 [job3] 00:10:46.344 filename=/dev/nvme0n4 00:10:46.344 Could not set queue depth (nvme0n1) 00:10:46.344 Could not set queue depth (nvme0n2) 00:10:46.344 Could not set queue depth (nvme0n3) 00:10:46.344 Could not set queue depth (nvme0n4) 00:10:46.605 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:46.605 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:46.605 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:46.605 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:46.605 fio-3.35 00:10:46.605 Starting 4 threads 00:10:47.994 00:10:47.994 job0: (groupid=0, jobs=1): err= 0: pid=462653: Wed Nov 20 15:20:36 2024 00:10:47.994 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:47.994 slat (nsec): min=26897, max=45584, avg=27687.07, stdev=2211.08 00:10:47.994 clat (usec): min=499, max=1375, avg=1004.77, stdev=120.45 00:10:47.994 lat (usec): min=527, max=1402, avg=1032.46, stdev=120.41 00:10:47.994 clat percentiles (usec): 00:10:47.994 | 1.00th=[ 660], 5.00th=[ 799], 10.00th=[ 848], 20.00th=[ 922], 
00:10:47.994 | 30.00th=[ 955], 40.00th=[ 988], 50.00th=[ 1012], 60.00th=[ 1045], 00:10:47.994 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1156], 95.00th=[ 1188], 00:10:47.994 | 99.00th=[ 1237], 99.50th=[ 1254], 99.90th=[ 1369], 99.95th=[ 1369], 00:10:47.994 | 99.99th=[ 1369] 00:10:47.994 write: IOPS=680, BW=2721KiB/s (2787kB/s)(2724KiB/1001msec); 0 zone resets 00:10:47.994 slat (nsec): min=9601, max=81121, avg=34342.77, stdev=8059.83 00:10:47.994 clat (usec): min=150, max=1119, avg=644.09, stdev=157.57 00:10:47.995 lat (usec): min=161, max=1154, avg=678.43, stdev=159.98 00:10:47.995 clat percentiles (usec): 00:10:47.995 | 1.00th=[ 265], 5.00th=[ 371], 10.00th=[ 429], 20.00th=[ 502], 00:10:47.995 | 30.00th=[ 570], 40.00th=[ 619], 50.00th=[ 660], 60.00th=[ 693], 00:10:47.995 | 70.00th=[ 725], 80.00th=[ 775], 90.00th=[ 840], 95.00th=[ 881], 00:10:47.995 | 99.00th=[ 988], 99.50th=[ 1020], 99.90th=[ 1123], 99.95th=[ 1123], 00:10:47.995 | 99.99th=[ 1123] 00:10:47.995 bw ( KiB/s): min= 4096, max= 4096, per=35.16%, avg=4096.00, stdev= 0.00, samples=1 00:10:47.995 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:47.995 lat (usec) : 250=0.42%, 500=10.39%, 750=33.61%, 1000=31.43% 00:10:47.995 lat (msec) : 2=24.14% 00:10:47.995 cpu : usr=2.80%, sys=4.80%, ctx=1195, majf=0, minf=1 00:10:47.995 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:47.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.995 issued rwts: total=512,681,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.995 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:47.995 job1: (groupid=0, jobs=1): err= 0: pid=462672: Wed Nov 20 15:20:36 2024 00:10:47.995 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:47.995 slat (nsec): min=7199, max=46778, avg=29185.45, stdev=3338.61 00:10:47.995 clat (usec): min=652, max=1202, avg=990.42, stdev=78.08 00:10:47.995 lat (usec): min=681, max=1231, avg=1019.60, stdev=78.04 00:10:47.995 clat percentiles (usec): 00:10:47.995 | 1.00th=[ 758], 5.00th=[ 840], 10.00th=[ 889], 20.00th=[ 947], 00:10:47.995 | 30.00th=[ 963], 40.00th=[ 988], 50.00th=[ 1004], 60.00th=[ 1012], 00:10:47.995 | 70.00th=[ 1029], 80.00th=[ 1045], 90.00th=[ 1074], 95.00th=[ 1090], 00:10:47.995 | 99.00th=[ 1156], 99.50th=[ 1156], 99.90th=[ 1205], 99.95th=[ 1205], 00:10:47.995 | 99.99th=[ 1205] 00:10:47.995 write: IOPS=793, BW=3173KiB/s (3249kB/s)(3176KiB/1001msec); 0 zone resets 00:10:47.995 slat (nsec): min=9232, max=56474, avg=32651.31, stdev=10592.14 00:10:47.995 clat (usec): min=137, max=949, avg=556.33, stdev=136.73 00:10:47.995 lat (usec): min=174, max=985, avg=588.98, stdev=139.76 00:10:47.995 clat percentiles (usec): 00:10:47.995 | 1.00th=[ 231], 5.00th=[ 306], 10.00th=[ 371], 20.00th=[ 449], 00:10:47.995 | 30.00th=[ 490], 40.00th=[ 537], 50.00th=[ 570], 60.00th=[ 594], 00:10:47.995 | 70.00th=[ 635], 80.00th=[ 668], 90.00th=[ 725], 95.00th=[ 766], 00:10:47.995 | 99.00th=[ 881], 99.50th=[ 889], 99.90th=[ 947], 99.95th=[ 947], 00:10:47.995 | 99.99th=[ 947] 00:10:47.995 bw ( KiB/s): min= 4096, max= 4096, per=35.16%, avg=4096.00, stdev= 0.00, samples=1 00:10:47.995 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:47.995 lat (usec) : 250=0.84%, 500=18.99%, 750=37.52%, 1000=22.59% 00:10:47.995 lat (msec) : 2=20.06% 00:10:47.995 cpu : usr=3.20%, sys=4.90%, ctx=1307, majf=0, minf=1 00:10:47.995 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:47.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.995 issued rwts: total=512,794,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.995 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:47.995 job2: (groupid=0, jobs=1): err= 0: pid=462692: Wed Nov 20 15:20:36 2024 00:10:47.995 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:47.995 slat (nsec): min=7668, max=58633, avg=26670.80, stdev=8480.05 00:10:47.995 clat (usec): min=382, max=41337, avg=1366.33, stdev=4327.61 00:10:47.995 lat (usec): min=415, max=41364, avg=1393.00, stdev=4327.78 00:10:47.995 clat percentiles (usec): 00:10:47.995 | 1.00th=[ 510], 5.00th=[ 619], 10.00th=[ 685], 20.00th=[ 766], 00:10:47.995 | 30.00th=[ 824], 40.00th=[ 881], 50.00th=[ 914], 60.00th=[ 955], 00:10:47.995 | 70.00th=[ 988], 80.00th=[ 1020], 90.00th=[ 1074], 95.00th=[ 1139], 00:10:47.995 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:47.995 | 99.99th=[41157] 00:10:47.995 write: IOPS=560, BW=2242KiB/s (2296kB/s)(2244KiB/1001msec); 0 zone resets 00:10:47.995 slat (nsec): min=9863, max=53968, avg=31868.49, stdev=9555.95 00:10:47.995 clat (usec): min=160, max=811, avg=464.96, stdev=123.93 00:10:47.995 lat (usec): min=197, max=846, avg=496.83, stdev=125.50 00:10:47.995 clat percentiles (usec): 00:10:47.995 | 1.00th=[ 196], 5.00th=[ 281], 10.00th=[ 310], 20.00th=[ 351], 00:10:47.995 | 30.00th=[ 375], 40.00th=[ 420], 50.00th=[ 465], 60.00th=[ 502], 00:10:47.995 | 70.00th=[ 545], 80.00th=[ 578], 90.00th=[ 627], 95.00th=[ 660], 00:10:47.995 | 99.00th=[ 758], 99.50th=[ 783], 99.90th=[ 816], 99.95th=[ 816], 00:10:47.995 | 99.99th=[ 816] 00:10:47.995 bw ( KiB/s): min= 4096, max= 4096, per=35.16%, avg=4096.00, stdev= 0.00, samples=1 00:10:47.995 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:47.995 lat (usec) : 250=1.03%, 500=30.48%, 750=28.52%, 1000=27.96% 00:10:47.995 lat (msec) : 2=11.46%, 50=0.56% 00:10:47.995 cpu : usr=2.50%, sys=3.10%, ctx=1074, majf=0, minf=1 00:10:47.995 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:47.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.995 issued rwts: total=512,561,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.995 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:47.995 job3: (groupid=0, jobs=1): err= 0: pid=462699: Wed Nov 20 15:20:36 2024 00:10:47.995 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:47.995 slat (nsec): min=7222, max=74247, avg=28138.74, stdev=4095.03 00:10:47.995 clat (usec): min=580, max=1184, avg=898.96, stdev=110.44 00:10:47.995 lat (usec): min=608, max=1212, avg=927.10, stdev=110.46 00:10:47.995 clat percentiles (usec): 00:10:47.995 | 1.00th=[ 635], 5.00th=[ 701], 10.00th=[ 742], 20.00th=[ 799], 00:10:47.995 | 30.00th=[ 848], 40.00th=[ 881], 50.00th=[ 914], 60.00th=[ 938], 00:10:47.995 | 70.00th=[ 971], 80.00th=[ 1004], 90.00th=[ 1037], 95.00th=[ 1057], 00:10:47.995 | 99.00th=[ 1090], 99.50th=[ 1106], 99.90th=[ 1188], 99.95th=[ 1188], 00:10:47.995 | 99.99th=[ 1188] 00:10:47.995 write: IOPS=878, BW=3512KiB/s (3597kB/s)(3516KiB/1001msec); 0 zone resets 00:10:47.995 slat (nsec): min=9397, max=72785, avg=33110.04, stdev=10268.90 00:10:47.995 clat (usec): min=177, max=1091, avg=552.79, 
stdev=138.08 00:10:47.995 lat (usec): min=191, max=1127, avg=585.90, stdev=141.35 00:10:47.995 clat percentiles (usec): 00:10:47.995 | 1.00th=[ 235], 5.00th=[ 318], 10.00th=[ 363], 20.00th=[ 437], 00:10:47.995 | 30.00th=[ 478], 40.00th=[ 529], 50.00th=[ 562], 60.00th=[ 594], 00:10:47.995 | 70.00th=[ 627], 80.00th=[ 668], 90.00th=[ 717], 95.00th=[ 766], 00:10:47.995 | 99.00th=[ 865], 99.50th=[ 938], 99.90th=[ 1090], 99.95th=[ 1090], 00:10:47.995 | 99.99th=[ 1090] 00:10:47.995 bw ( KiB/s): min= 4096, max= 4096, per=35.16%, avg=4096.00, stdev= 0.00, samples=1 00:10:47.995 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:47.995 lat (usec) : 250=0.86%, 500=20.92%, 750=41.62%, 1000=29.19% 00:10:47.995 lat (msec) : 2=7.40% 00:10:47.995 cpu : usr=1.80%, sys=6.80%, ctx=1393, majf=0, minf=1 00:10:47.995 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:47.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.995 issued rwts: total=512,879,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.995 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:47.995 00:10:47.995 Run status group 0 (all jobs): 00:10:47.995 READ: bw=8184KiB/s (8380kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:10:47.995 WRITE: bw=11.4MiB/s (11.9MB/s), 2242KiB/s-3512KiB/s (2296kB/s-3597kB/s), io=11.4MiB (11.9MB), run=1001-1001msec 00:10:47.995 00:10:47.995 Disk stats (read/write): 00:10:47.995 nvme0n1: ios=485/512, merge=0/0, ticks=1258/247, in_queue=1505, util=83.97% 00:10:47.995 nvme0n2: ios=562/521, merge=0/0, ticks=808/220, in_queue=1028, util=88.15% 00:10:47.995 nvme0n3: ios=395/512, merge=0/0, ticks=1468/226, in_queue=1694, util=92.18% 00:10:47.995 nvme0n4: ios=561/611, merge=0/0, ticks=536/251, in_queue=787, util=96.90% 00:10:47.995 15:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:47.995 [global] 00:10:47.995 thread=1 00:10:47.995 invalidate=1 00:10:47.995 rw=randwrite 00:10:47.995 time_based=1 00:10:47.995 runtime=1 00:10:47.995 ioengine=libaio 00:10:47.995 direct=1 00:10:47.995 bs=4096 00:10:47.995 iodepth=1 00:10:47.995 norandommap=0 00:10:47.995 numjobs=1 00:10:47.995 00:10:47.995 verify_dump=1 00:10:47.995 verify_backlog=512 00:10:47.995 verify_state_save=0 00:10:47.995 do_verify=1 00:10:47.995 verify=crc32c-intel 00:10:47.995 [job0] 00:10:47.995 filename=/dev/nvme0n1 00:10:47.995 [job1] 00:10:47.995 filename=/dev/nvme0n2 00:10:47.995 [job2] 00:10:47.995 filename=/dev/nvme0n3 00:10:47.995 [job3] 00:10:47.995 filename=/dev/nvme0n4 00:10:47.995 Could not set queue depth (nvme0n1) 00:10:47.995 Could not set queue depth (nvme0n2) 00:10:47.995 Could not set queue depth (nvme0n3) 00:10:47.995 Could not set queue depth (nvme0n4) 00:10:48.257 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.257 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.257 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.257 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.257 fio-3.35 00:10:48.257 Starting 4 threads 00:10:49.646 
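
The jobfile echoed above is generated by SPDK's scripts/fio-wrapper from the flags -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v. Assuming the wrapper does little more than forward those flags into the [global] section (-i as the block size, -d as the iodepth, -t as the I/O pattern, -r as the runtime, -v enabling the crc32c verify options — a mapping consistent with both jobfiles printed in this log), a roughly equivalent direct fio invocation against one of the four namespaces would be:

  # sketch only: reconstructed from the printed jobfile, not from the wrapper's code
  fio --name=job0 --filename=/dev/nvme0n1 \
      --ioengine=libaio --direct=1 --thread --invalidate=1 \
      --rw=randwrite --bs=4096 --iodepth=1 --numjobs=1 \
      --time_based --runtime=1 \
      --do_verify=1 --verify=crc32c-intel --verify_backlog=512 --verify_dump=1

The "Could not set queue depth" lines are fio warnings, not failures: fio prints them when it cannot read or adjust a block device's queue-depth setting, and the runs proceed regardless, as the completed results below show.
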
00:10:49.646 job0: (groupid=0, jobs=1): err= 0: pid=463148: Wed Nov 20 15:20:38 2024 00:10:49.646 read: IOPS=16, BW=66.9KiB/s (68.5kB/s)(68.0KiB/1016msec) 00:10:49.646 slat (nsec): min=25967, max=26617, avg=26282.71, stdev=164.99 00:10:49.646 clat (usec): min=1174, max=42099, avg=39504.32, stdev=9879.98 00:10:49.646 lat (usec): min=1201, max=42125, avg=39530.60, stdev=9879.94 00:10:49.646 clat percentiles (usec): 00:10:49.646 | 1.00th=[ 1172], 5.00th=[ 1172], 10.00th=[41157], 20.00th=[41681], 00:10:49.646 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:10:49.646 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:49.646 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:49.646 | 99.99th=[42206] 00:10:49.646 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:10:49.646 slat (nsec): min=9879, max=61390, avg=32239.65, stdev=7836.79 00:10:49.646 clat (usec): min=189, max=977, avg=630.28, stdev=139.76 00:10:49.646 lat (usec): min=200, max=1010, avg=662.52, stdev=141.35 00:10:49.646 clat percentiles (usec): 00:10:49.646 | 1.00th=[ 310], 5.00th=[ 383], 10.00th=[ 437], 20.00th=[ 519], 00:10:49.646 | 30.00th=[ 562], 40.00th=[ 603], 50.00th=[ 635], 60.00th=[ 676], 00:10:49.646 | 70.00th=[ 709], 80.00th=[ 750], 90.00th=[ 816], 95.00th=[ 857], 00:10:49.646 | 99.00th=[ 906], 99.50th=[ 930], 99.90th=[ 979], 99.95th=[ 979], 00:10:49.646 | 99.99th=[ 979] 00:10:49.646 bw ( KiB/s): min= 4096, max= 4096, per=45.94%, avg=4096.00, stdev= 0.00, samples=1 00:10:49.646 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:49.646 lat (usec) : 250=0.19%, 500=17.01%, 750=60.11%, 1000=19.47% 00:10:49.646 lat (msec) : 2=0.19%, 50=3.02% 00:10:49.646 cpu : usr=0.49%, sys=1.97%, ctx=532, majf=0, minf=1 00:10:49.646 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:49.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.646 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.646 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:49.646 job1: (groupid=0, jobs=1): err= 0: pid=463161: Wed Nov 20 15:20:38 2024 00:10:49.646 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:49.646 slat (nsec): min=25739, max=61375, avg=27024.36, stdev=3187.57 00:10:49.646 clat (usec): min=705, max=1303, avg=1051.84, stdev=93.42 00:10:49.646 lat (usec): min=731, max=1329, avg=1078.86, stdev=93.43 00:10:49.646 clat percentiles (usec): 00:10:49.646 | 1.00th=[ 783], 5.00th=[ 857], 10.00th=[ 930], 20.00th=[ 988], 00:10:49.646 | 30.00th=[ 1020], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1090], 00:10:49.646 | 70.00th=[ 1106], 80.00th=[ 1123], 90.00th=[ 1156], 95.00th=[ 1172], 00:10:49.646 | 99.00th=[ 1237], 99.50th=[ 1254], 99.90th=[ 1303], 99.95th=[ 1303], 00:10:49.646 | 99.99th=[ 1303] 00:10:49.646 write: IOPS=732, BW=2929KiB/s (2999kB/s)(2932KiB/1001msec); 0 zone resets 00:10:49.646 slat (nsec): min=9644, max=68227, avg=29968.89, stdev=9650.46 00:10:49.646 clat (usec): min=234, max=957, avg=566.87, stdev=119.42 00:10:49.646 lat (usec): min=245, max=991, avg=596.84, stdev=124.38 00:10:49.646 clat percentiles (usec): 00:10:49.646 | 1.00th=[ 289], 5.00th=[ 359], 10.00th=[ 408], 20.00th=[ 469], 00:10:49.646 | 30.00th=[ 510], 40.00th=[ 537], 50.00th=[ 570], 60.00th=[ 594], 00:10:49.646 | 70.00th=[ 627], 80.00th=[ 668], 90.00th=[ 717], 95.00th=[ 
758], 00:10:49.646 | 99.00th=[ 848], 99.50th=[ 889], 99.90th=[ 955], 99.95th=[ 955], 00:10:49.646 | 99.99th=[ 955] 00:10:49.646 bw ( KiB/s): min= 4096, max= 4096, per=45.94%, avg=4096.00, stdev= 0.00, samples=1 00:10:49.646 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:49.646 lat (usec) : 250=0.16%, 500=16.79%, 750=38.63%, 1000=13.17% 00:10:49.646 lat (msec) : 2=31.24% 00:10:49.646 cpu : usr=1.90%, sys=3.70%, ctx=1247, majf=0, minf=1 00:10:49.646 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:49.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.646 issued rwts: total=512,733,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.646 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:49.646 job2: (groupid=0, jobs=1): err= 0: pid=463179: Wed Nov 20 15:20:38 2024 00:10:49.646 read: IOPS=19, BW=79.2KiB/s (81.1kB/s)(80.0KiB/1010msec) 00:10:49.646 slat (nsec): min=28335, max=29120, avg=28608.60, stdev=183.23 00:10:49.646 clat (usec): min=954, max=41353, avg=38980.99, stdev=8950.97 00:10:49.646 lat (usec): min=983, max=41382, avg=39009.60, stdev=8950.95 00:10:49.646 clat percentiles (usec): 00:10:49.646 | 1.00th=[ 955], 5.00th=[ 955], 10.00th=[40633], 20.00th=[41157], 00:10:49.646 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:49.646 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:49.646 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:49.646 | 99.99th=[41157] 00:10:49.646 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:10:49.646 slat (nsec): min=8991, max=55966, avg=30671.36, stdev=10642.71 00:10:49.646 clat (usec): min=150, max=778, avg=408.77, stdev=107.39 00:10:49.646 lat (usec): min=186, max=815, avg=439.44, stdev=109.66 00:10:49.646 clat percentiles (usec): 00:10:49.646 | 1.00th=[ 210], 5.00th=[ 237], 10.00th=[ 277], 20.00th=[ 314], 00:10:49.646 | 30.00th=[ 338], 40.00th=[ 367], 50.00th=[ 412], 60.00th=[ 441], 00:10:49.646 | 70.00th=[ 469], 80.00th=[ 506], 90.00th=[ 553], 95.00th=[ 578], 00:10:49.646 | 99.00th=[ 652], 99.50th=[ 676], 99.90th=[ 783], 99.95th=[ 783], 00:10:49.646 | 99.99th=[ 783] 00:10:49.646 bw ( KiB/s): min= 4096, max= 4096, per=45.94%, avg=4096.00, stdev= 0.00, samples=1 00:10:49.646 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:49.646 lat (usec) : 250=7.33%, 500=68.80%, 750=19.74%, 1000=0.56% 00:10:49.646 lat (msec) : 50=3.57% 00:10:49.646 cpu : usr=1.68%, sys=1.39%, ctx=534, majf=0, minf=1 00:10:49.646 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:49.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.646 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.646 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:49.646 job3: (groupid=0, jobs=1): err= 0: pid=463186: Wed Nov 20 15:20:38 2024 00:10:49.646 read: IOPS=17, BW=70.7KiB/s (72.4kB/s)(72.0KiB/1018msec) 00:10:49.646 slat (nsec): min=25820, max=26435, avg=26091.56, stdev=183.43 00:10:49.646 clat (usec): min=974, max=42002, avg=39357.86, stdev=9590.44 00:10:49.646 lat (usec): min=1000, max=42027, avg=39383.95, stdev=9590.37 00:10:49.646 clat percentiles (usec): 00:10:49.646 | 1.00th=[ 971], 5.00th=[ 971], 10.00th=[41157], 
20.00th=[41157], 00:10:49.646 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:10:49.646 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:49.646 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:49.646 | 99.99th=[42206] 00:10:49.646 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:10:49.647 slat (nsec): min=9783, max=52979, avg=32298.03, stdev=7278.97 00:10:49.647 clat (usec): min=151, max=909, avg=561.94, stdev=151.43 00:10:49.647 lat (usec): min=162, max=941, avg=594.23, stdev=152.72 00:10:49.647 clat percentiles (usec): 00:10:49.647 | 1.00th=[ 239], 5.00th=[ 302], 10.00th=[ 351], 20.00th=[ 412], 00:10:49.647 | 30.00th=[ 482], 40.00th=[ 529], 50.00th=[ 578], 60.00th=[ 619], 00:10:49.647 | 70.00th=[ 660], 80.00th=[ 701], 90.00th=[ 758], 95.00th=[ 783], 00:10:49.647 | 99.00th=[ 857], 99.50th=[ 889], 99.90th=[ 914], 99.95th=[ 914], 00:10:49.647 | 99.99th=[ 914] 00:10:49.647 bw ( KiB/s): min= 4096, max= 4096, per=45.94%, avg=4096.00, stdev= 0.00, samples=1 00:10:49.647 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:49.647 lat (usec) : 250=1.13%, 500=31.32%, 750=53.40%, 1000=10.94% 00:10:49.647 lat (msec) : 50=3.21% 00:10:49.647 cpu : usr=0.79%, sys=1.57%, ctx=532, majf=0, minf=2 00:10:49.647 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:49.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.647 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.647 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:49.647 00:10:49.647 Run status group 0 (all jobs): 00:10:49.647 READ: bw=2228KiB/s (2281kB/s), 66.9KiB/s-2046KiB/s (68.5kB/s-2095kB/s), io=2268KiB (2322kB), run=1001-1018msec 00:10:49.647 WRITE: bw=8916KiB/s (9129kB/s), 2012KiB/s-2929KiB/s (2060kB/s-2999kB/s), io=9076KiB (9294kB), run=1001-1018msec 00:10:49.647 00:10:49.647 Disk stats (read/write): 00:10:49.647 nvme0n1: ios=58/512, merge=0/0, ticks=511/304, in_queue=815, util=87.07% 00:10:49.647 nvme0n2: ios=507/512, merge=0/0, ticks=1401/279, in_queue=1680, util=88.38% 00:10:49.647 nvme0n3: ios=73/512, merge=0/0, ticks=846/168, in_queue=1014, util=92.73% 00:10:49.647 nvme0n4: ios=37/512, merge=0/0, ticks=1404/271, in_queue=1675, util=94.13% 00:10:49.647 15:20:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:49.647 [global] 00:10:49.647 thread=1 00:10:49.647 invalidate=1 00:10:49.647 rw=write 00:10:49.647 time_based=1 00:10:49.647 runtime=1 00:10:49.647 ioengine=libaio 00:10:49.647 direct=1 00:10:49.647 bs=4096 00:10:49.647 iodepth=128 00:10:49.647 norandommap=0 00:10:49.647 numjobs=1 00:10:49.647 00:10:49.647 verify_dump=1 00:10:49.647 verify_backlog=512 00:10:49.647 verify_state_save=0 00:10:49.647 do_verify=1 00:10:49.647 verify=crc32c-intel 00:10:49.647 [job0] 00:10:49.647 filename=/dev/nvme0n1 00:10:49.647 [job1] 00:10:49.647 filename=/dev/nvme0n2 00:10:49.647 [job2] 00:10:49.647 filename=/dev/nvme0n3 00:10:49.647 [job3] 00:10:49.647 filename=/dev/nvme0n4 00:10:49.647 Could not set queue depth (nvme0n1) 00:10:49.647 Could not set queue depth (nvme0n2) 00:10:49.647 Could not set queue depth (nvme0n3) 00:10:49.647 Could not set queue depth (nvme0n4) 00:10:49.907 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:49.907 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:49.907 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:49.907 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:49.907 fio-3.35 00:10:49.907 Starting 4 threads 00:10:51.287 00:10:51.287 job0: (groupid=0, jobs=1): err= 0: pid=463644: Wed Nov 20 15:20:40 2024 00:10:51.287 read: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec) 00:10:51.287 slat (nsec): min=927, max=18373k, avg=124500.30, stdev=831678.25 00:10:51.287 clat (usec): min=6145, max=52816, avg=14980.78, stdev=9346.61 00:10:51.287 lat (usec): min=6149, max=59279, avg=15105.29, stdev=9434.38 00:10:51.287 clat percentiles (usec): 00:10:51.287 | 1.00th=[ 6783], 5.00th=[ 7898], 10.00th=[ 8160], 20.00th=[ 8455], 00:10:51.287 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[10290], 60.00th=[11863], 00:10:51.287 | 70.00th=[15401], 80.00th=[23987], 90.00th=[28967], 95.00th=[34866], 00:10:51.287 | 99.00th=[46400], 99.50th=[48497], 99.90th=[52691], 99.95th=[52691], 00:10:51.287 | 99.99th=[52691] 00:10:51.287 write: IOPS=3657, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1008msec); 0 zone resets 00:10:51.287 slat (nsec): min=1641, max=23063k, avg=144606.55, stdev=789618.64 00:10:51.287 clat (usec): min=732, max=64283, avg=20046.68, stdev=13885.07 00:10:51.287 lat (usec): min=742, max=64292, avg=20191.29, stdev=13960.15 00:10:51.287 clat percentiles (usec): 00:10:51.287 | 1.00th=[ 4621], 5.00th=[ 7177], 10.00th=[ 7504], 20.00th=[11076], 00:10:51.287 | 30.00th=[12387], 40.00th=[12780], 50.00th=[13435], 60.00th=[15926], 00:10:51.287 | 70.00th=[22152], 80.00th=[29230], 90.00th=[44827], 95.00th=[53740], 00:10:51.287 | 99.00th=[58459], 99.50th=[60556], 99.90th=[64226], 99.95th=[64226], 00:10:51.287 | 99.99th=[64226] 00:10:51.287 bw ( KiB/s): min=12168, max=16504, per=17.13%, avg=14336.00, stdev=3066.02, samples=2 00:10:51.287 iops : min= 3042, max= 4126, avg=3584.00, stdev=766.50, samples=2 00:10:51.287 lat (usec) : 750=0.03%, 1000=0.01% 00:10:51.287 lat (msec) : 2=0.10%, 4=0.01%, 10=32.54%, 20=40.21%, 50=23.44% 00:10:51.287 lat (msec) : 100=3.66% 00:10:51.287 cpu : usr=2.48%, sys=3.97%, ctx=465, majf=0, minf=1 00:10:51.287 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:10:51.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.287 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:51.287 issued rwts: total=3584,3687,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.287 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:51.287 job1: (groupid=0, jobs=1): err= 0: pid=463657: Wed Nov 20 15:20:40 2024 00:10:51.287 read: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(17.9MiB/1008msec) 00:10:51.287 slat (nsec): min=881, max=14643k, avg=82857.67, stdev=691797.10 00:10:51.287 clat (usec): min=1342, max=33069, avg=11003.46, stdev=5008.42 00:10:51.287 lat (usec): min=1369, max=33093, avg=11086.32, stdev=5073.04 00:10:51.287 clat percentiles (usec): 00:10:51.287 | 1.00th=[ 2114], 5.00th=[ 3982], 10.00th=[ 5473], 20.00th=[ 6652], 00:10:51.287 | 30.00th=[ 7111], 40.00th=[ 8356], 50.00th=[10814], 60.00th=[12256], 00:10:51.287 | 70.00th=[14222], 80.00th=[15270], 90.00th=[17433], 95.00th=[19006], 00:10:51.287 | 99.00th=[22152], 99.50th=[26346], 99.90th=[32375], 
99.95th=[32375], 00:10:51.287 | 99.99th=[33162] 00:10:51.287 write: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec); 0 zone resets 00:10:51.287 slat (nsec): min=1721, max=11501k, avg=120071.32, stdev=699124.57 00:10:51.287 clat (usec): min=797, max=64087, avg=16772.72, stdev=17908.74 00:10:51.287 lat (usec): min=805, max=64095, avg=16892.79, stdev=18037.86 00:10:51.287 clat percentiles (usec): 00:10:51.287 | 1.00th=[ 1319], 5.00th=[ 3982], 10.00th=[ 4424], 20.00th=[ 5932], 00:10:51.287 | 30.00th=[ 6980], 40.00th=[ 8160], 50.00th=[ 8717], 60.00th=[ 9241], 00:10:51.287 | 70.00th=[12256], 80.00th=[23462], 90.00th=[54264], 95.00th=[57410], 00:10:51.287 | 99.00th=[62129], 99.50th=[63177], 99.90th=[64226], 99.95th=[64226], 00:10:51.287 | 99.99th=[64226] 00:10:51.287 bw ( KiB/s): min=16384, max=20480, per=22.02%, avg=18432.00, stdev=2896.31, samples=2 00:10:51.287 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:10:51.287 lat (usec) : 1000=0.07% 00:10:51.287 lat (msec) : 2=1.27%, 4=4.43%, 10=50.12%, 20=31.47%, 50=5.89% 00:10:51.287 lat (msec) : 100=6.75% 00:10:51.287 cpu : usr=3.77%, sys=5.56%, ctx=297, majf=0, minf=2 00:10:51.287 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:51.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.287 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:51.287 issued rwts: total=4594,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.287 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:51.287 job2: (groupid=0, jobs=1): err= 0: pid=463677: Wed Nov 20 15:20:40 2024 00:10:51.287 read: IOPS=6221, BW=24.3MiB/s (25.5MB/s)(24.5MiB/1008msec) 00:10:51.287 slat (nsec): min=1042, max=10372k, avg=78651.27, stdev=576756.68 00:10:51.287 clat (usec): min=3568, max=56474, avg=10404.59, stdev=5388.42 00:10:51.287 lat (usec): min=3573, max=56482, avg=10483.24, stdev=5441.25 00:10:51.287 clat percentiles (usec): 00:10:51.287 | 1.00th=[ 5604], 5.00th=[ 5997], 10.00th=[ 6521], 20.00th=[ 7111], 00:10:51.287 | 30.00th=[ 7308], 40.00th=[ 7963], 50.00th=[ 8979], 60.00th=[ 9634], 00:10:51.287 | 70.00th=[11207], 80.00th=[13173], 90.00th=[16450], 95.00th=[17171], 00:10:51.287 | 99.00th=[34341], 99.50th=[49021], 99.90th=[55837], 99.95th=[56361], 00:10:51.287 | 99.99th=[56361] 00:10:51.287 write: IOPS=6603, BW=25.8MiB/s (27.0MB/s)(26.0MiB/1008msec); 0 zone resets 00:10:51.287 slat (nsec): min=1752, max=8391.6k, avg=70234.02, stdev=491382.70 00:10:51.287 clat (usec): min=1673, max=56468, avg=9370.57, stdev=7101.28 00:10:51.287 lat (usec): min=1682, max=56479, avg=9440.80, stdev=7138.73 00:10:51.287 clat percentiles (usec): 00:10:51.287 | 1.00th=[ 3359], 5.00th=[ 4293], 10.00th=[ 4490], 20.00th=[ 5997], 00:10:51.287 | 30.00th=[ 6456], 40.00th=[ 6783], 50.00th=[ 6980], 60.00th=[ 7439], 00:10:51.287 | 70.00th=[ 9241], 80.00th=[11469], 90.00th=[14091], 95.00th=[26608], 00:10:51.287 | 99.00th=[40633], 99.50th=[46924], 99.90th=[51119], 99.95th=[51119], 00:10:51.287 | 99.99th=[56361] 00:10:51.287 bw ( KiB/s): min=24568, max=28672, per=31.80%, avg=26620.00, stdev=2901.97, samples=2 00:10:51.287 iops : min= 6142, max= 7168, avg=6655.00, stdev=725.49, samples=2 00:10:51.287 lat (msec) : 2=0.05%, 4=1.46%, 10=68.13%, 20=26.27%, 50=3.74% 00:10:51.287 lat (msec) : 100=0.35% 00:10:51.287 cpu : usr=5.66%, sys=8.04%, ctx=323, majf=0, minf=1 00:10:51.287 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:51.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.287 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:51.287 issued rwts: total=6271,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.287 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:51.287 job3: (groupid=0, jobs=1): err= 0: pid=463685: Wed Nov 20 15:20:40 2024 00:10:51.287 read: IOPS=6087, BW=23.8MiB/s (24.9MB/s)(23.9MiB/1006msec) 00:10:51.287 slat (nsec): min=1015, max=9958.4k, avg=76637.05, stdev=530923.88 00:10:51.287 clat (usec): min=2720, max=33181, avg=9554.29, stdev=3309.02 00:10:51.287 lat (usec): min=3459, max=33185, avg=9630.93, stdev=3350.09 00:10:51.287 clat percentiles (usec): 00:10:51.287 | 1.00th=[ 5211], 5.00th=[ 6128], 10.00th=[ 6521], 20.00th=[ 7242], 00:10:51.288 | 30.00th=[ 7570], 40.00th=[ 8291], 50.00th=[ 9110], 60.00th=[ 9634], 00:10:51.288 | 70.00th=[10290], 80.00th=[11207], 90.00th=[12780], 95.00th=[15401], 00:10:51.288 | 99.00th=[22938], 99.50th=[26346], 99.90th=[30016], 99.95th=[33162], 00:10:51.288 | 99.99th=[33162] 00:10:51.288 write: IOPS=6107, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1006msec); 0 zone resets 00:10:51.288 slat (nsec): min=1733, max=14312k, avg=80943.92, stdev=458962.19 00:10:51.288 clat (usec): min=2387, max=33188, avg=11186.33, stdev=5584.17 00:10:51.288 lat (usec): min=2395, max=33192, avg=11267.27, stdev=5622.68 00:10:51.288 clat percentiles (usec): 00:10:51.288 | 1.00th=[ 3556], 5.00th=[ 4113], 10.00th=[ 4752], 20.00th=[ 6259], 00:10:51.288 | 30.00th=[ 6980], 40.00th=[ 8029], 50.00th=[10814], 60.00th=[12518], 00:10:51.288 | 70.00th=[13304], 80.00th=[15926], 90.00th=[19792], 95.00th=[21365], 00:10:51.288 | 99.00th=[27132], 99.50th=[28705], 99.90th=[30016], 99.95th=[30016], 00:10:51.288 | 99.99th=[33162] 00:10:51.288 bw ( KiB/s): min=21936, max=27216, per=29.36%, avg=24576.00, stdev=3733.52, samples=2 00:10:51.288 iops : min= 5484, max= 6804, avg=6144.00, stdev=933.38, samples=2 00:10:51.288 lat (msec) : 4=1.60%, 10=55.33%, 20=37.75%, 50=5.32% 00:10:51.288 cpu : usr=4.08%, sys=7.36%, ctx=533, majf=0, minf=1 00:10:51.288 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:51.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.288 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:51.288 issued rwts: total=6124,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.288 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:51.288 00:10:51.288 Run status group 0 (all jobs): 00:10:51.288 READ: bw=79.7MiB/s (83.6MB/s), 13.9MiB/s-24.3MiB/s (14.6MB/s-25.5MB/s), io=80.4MiB (84.3MB), run=1006-1008msec 00:10:51.288 WRITE: bw=81.7MiB/s (85.7MB/s), 14.3MiB/s-25.8MiB/s (15.0MB/s-27.0MB/s), io=82.4MiB (86.4MB), run=1006-1008msec 00:10:51.288 00:10:51.288 Disk stats (read/write): 00:10:51.288 nvme0n1: ios=3121/3095, merge=0/0, ticks=24181/28164, in_queue=52345, util=84.17% 00:10:51.288 nvme0n2: ios=3261/3584, merge=0/0, ticks=32225/66705, in_queue=98930, util=90.83% 00:10:51.288 nvme0n3: ios=5693/5991, merge=0/0, ticks=50757/48565, in_queue=99322, util=92.83% 00:10:51.288 nvme0n4: ios=4826/5120, merge=0/0, ticks=45253/56714, in_queue=101967, util=94.02% 00:10:51.288 15:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:51.288 [global] 00:10:51.288 thread=1 00:10:51.288 invalidate=1 00:10:51.288 rw=randwrite 00:10:51.288 time_based=1 00:10:51.288 
runtime=1 00:10:51.288 ioengine=libaio 00:10:51.288 direct=1 00:10:51.288 bs=4096 00:10:51.288 iodepth=128 00:10:51.288 norandommap=0 00:10:51.288 numjobs=1 00:10:51.288 00:10:51.288 verify_dump=1 00:10:51.288 verify_backlog=512 00:10:51.288 verify_state_save=0 00:10:51.288 do_verify=1 00:10:51.288 verify=crc32c-intel 00:10:51.288 [job0] 00:10:51.288 filename=/dev/nvme0n1 00:10:51.288 [job1] 00:10:51.288 filename=/dev/nvme0n2 00:10:51.288 [job2] 00:10:51.288 filename=/dev/nvme0n3 00:10:51.288 [job3] 00:10:51.288 filename=/dev/nvme0n4 00:10:51.288 Could not set queue depth (nvme0n1) 00:10:51.288 Could not set queue depth (nvme0n2) 00:10:51.288 Could not set queue depth (nvme0n3) 00:10:51.288 Could not set queue depth (nvme0n4) 00:10:51.548 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:51.548 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:51.548 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:51.548 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:51.548 fio-3.35 00:10:51.548 Starting 4 threads 00:10:52.931 00:10:52.931 job0: (groupid=0, jobs=1): err= 0: pid=464112: Wed Nov 20 15:20:41 2024 00:10:52.931 read: IOPS=5590, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1006msec) 00:10:52.931 slat (nsec): min=903, max=13213k, avg=86988.31, stdev=622034.99 00:10:52.931 clat (usec): min=2663, max=32394, avg=10946.06, stdev=4077.12 00:10:52.931 lat (usec): min=3301, max=32402, avg=11033.05, stdev=4132.78 00:10:52.931 clat percentiles (usec): 00:10:52.931 | 1.00th=[ 4686], 5.00th=[ 5800], 10.00th=[ 6259], 20.00th=[ 6915], 00:10:52.931 | 30.00th=[ 7373], 40.00th=[ 9110], 50.00th=[10945], 60.00th=[12125], 00:10:52.931 | 70.00th=[13042], 80.00th=[14615], 90.00th=[16188], 95.00th=[17171], 00:10:52.931 | 99.00th=[22414], 99.50th=[23462], 99.90th=[28967], 99.95th=[32375], 00:10:52.931 | 99.99th=[32375] 00:10:52.931 write: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec); 0 zone resets 00:10:52.931 slat (nsec): min=1570, max=9627.2k, avg=84883.20, stdev=459279.77 00:10:52.931 clat (usec): min=1147, max=51642, avg=11720.16, stdev=8018.94 00:10:52.931 lat (usec): min=1157, max=51647, avg=11805.05, stdev=8073.94 00:10:52.931 clat percentiles (usec): 00:10:52.931 | 1.00th=[ 3261], 5.00th=[ 4490], 10.00th=[ 6652], 20.00th=[ 7111], 00:10:52.931 | 30.00th=[ 7832], 40.00th=[ 9110], 50.00th=[10159], 60.00th=[11338], 00:10:52.931 | 70.00th=[12125], 80.00th=[13173], 90.00th=[15795], 95.00th=[27919], 00:10:52.931 | 99.00th=[47973], 99.50th=[49021], 99.90th=[51643], 99.95th=[51643], 00:10:52.931 | 99.99th=[51643] 00:10:52.931 bw ( KiB/s): min=22211, max=22800, per=24.74%, avg=22505.50, stdev=416.49, samples=2 00:10:52.931 iops : min= 5552, max= 5700, avg=5626.00, stdev=104.65, samples=2 00:10:52.931 lat (msec) : 2=0.08%, 4=1.04%, 10=46.66%, 20=48.47%, 50=3.62% 00:10:52.931 lat (msec) : 100=0.12% 00:10:52.931 cpu : usr=4.28%, sys=5.57%, ctx=579, majf=0, minf=2 00:10:52.931 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:52.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:52.931 issued rwts: total=5624,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.931 latency : target=0, window=0, percentile=100.00%, depth=128 
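
The per-job summaries can be cross-checked from the issued I/O counts: job0 above completed 5632 writes of 4096 B in about 1006 ms, i.e. 5632 × 4096 B ≈ 22.0 MiB over 1.006 s ≈ 21.9 MiB/s, matching the reported BW=21.9MiB/s. The per= field in the bw line is the job's share of the group's aggregate bandwidth (here 24.74% of the four-job write total), and the lat (msec) rows are latency buckets: about 47% of job0's I/Os completed in the 4-10 ms bucket and another 48% in 10-20 ms. That distribution is what Little's law predicts at this queue depth — 128 outstanding I/Os at roughly 11.2k combined read+write IOPS gives an average latency near 128/11200 ≈ 11 ms.
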
00:10:52.931 job1: (groupid=0, jobs=1): err= 0: pid=464140: Wed Nov 20 15:20:41 2024 00:10:52.931 read: IOPS=5152, BW=20.1MiB/s (21.1MB/s)(20.2MiB/1002msec) 00:10:52.931 slat (nsec): min=899, max=44115k, avg=101950.46, stdev=840230.96 00:10:52.931 clat (usec): min=1002, max=52328, avg=13254.21, stdev=6558.81 00:10:52.931 lat (usec): min=1900, max=52337, avg=13356.17, stdev=6579.70 00:10:52.931 clat percentiles (usec): 00:10:52.931 | 1.00th=[ 4817], 5.00th=[ 6390], 10.00th=[ 7963], 20.00th=[ 9503], 00:10:52.931 | 30.00th=[10945], 40.00th=[11863], 50.00th=[12780], 60.00th=[13435], 00:10:52.931 | 70.00th=[14615], 80.00th=[15139], 90.00th=[16319], 95.00th=[19268], 00:10:52.931 | 99.00th=[49546], 99.50th=[52167], 99.90th=[52167], 99.95th=[52167], 00:10:52.931 | 99.99th=[52167] 00:10:52.931 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:10:52.931 slat (nsec): min=1555, max=12130k, avg=78050.23, stdev=421739.30 00:10:52.931 clat (usec): min=751, max=24351, avg=10423.20, stdev=3375.01 00:10:52.931 lat (usec): min=1754, max=24378, avg=10501.25, stdev=3401.21 00:10:52.931 clat percentiles (usec): 00:10:52.931 | 1.00th=[ 4555], 5.00th=[ 5997], 10.00th=[ 6259], 20.00th=[ 6783], 00:10:52.931 | 30.00th=[ 8291], 40.00th=[ 9503], 50.00th=[10421], 60.00th=[11469], 00:10:52.931 | 70.00th=[12256], 80.00th=[12911], 90.00th=[14484], 95.00th=[15926], 00:10:52.931 | 99.00th=[20317], 99.50th=[21890], 99.90th=[23725], 99.95th=[23725], 00:10:52.931 | 99.99th=[24249] 00:10:52.931 bw ( KiB/s): min=19808, max=24526, per=24.36%, avg=22167.00, stdev=3336.13, samples=2 00:10:52.931 iops : min= 4952, max= 6131, avg=5541.50, stdev=833.68, samples=2 00:10:52.931 lat (usec) : 1000=0.01% 00:10:52.931 lat (msec) : 2=0.19%, 4=0.31%, 10=34.03%, 20=63.28%, 50=1.79% 00:10:52.931 lat (msec) : 100=0.40% 00:10:52.931 cpu : usr=3.50%, sys=5.19%, ctx=483, majf=0, minf=2 00:10:52.931 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:52.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:52.931 issued rwts: total=5163,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.931 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:52.931 job2: (groupid=0, jobs=1): err= 0: pid=464163: Wed Nov 20 15:20:41 2024 00:10:52.931 read: IOPS=6377, BW=24.9MiB/s (26.1MB/s)(25.1MiB/1006msec) 00:10:52.931 slat (nsec): min=962, max=10761k, avg=78195.07, stdev=581297.70 00:10:52.931 clat (usec): min=3277, max=26821, avg=10821.94, stdev=4674.35 00:10:52.931 lat (usec): min=3287, max=26824, avg=10900.13, stdev=4707.04 00:10:52.931 clat percentiles (usec): 00:10:52.931 | 1.00th=[ 3556], 5.00th=[ 6128], 10.00th=[ 6390], 20.00th=[ 6980], 00:10:52.931 | 30.00th=[ 7570], 40.00th=[ 8717], 50.00th=[ 9503], 60.00th=[10421], 00:10:52.931 | 70.00th=[11863], 80.00th=[14877], 90.00th=[18482], 95.00th=[20841], 00:10:52.931 | 99.00th=[24511], 99.50th=[25035], 99.90th=[26084], 99.95th=[26084], 00:10:52.931 | 99.99th=[26870] 00:10:52.931 write: IOPS=6616, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1006msec); 0 zone resets 00:10:52.931 slat (nsec): min=1620, max=9797.3k, avg=63479.50, stdev=432572.71 00:10:52.931 clat (usec): min=556, max=31981, avg=8681.12, stdev=4410.20 00:10:52.931 lat (usec): min=591, max=31984, avg=8744.60, stdev=4442.21 00:10:52.931 clat percentiles (usec): 00:10:52.931 | 1.00th=[ 1647], 5.00th=[ 3523], 10.00th=[ 4293], 20.00th=[ 5866], 00:10:52.931 | 30.00th=[ 6521], 40.00th=[ 
6980], 50.00th=[ 7701], 60.00th=[ 8291], 00:10:52.931 | 70.00th=[ 9765], 80.00th=[11994], 90.00th=[12911], 95.00th=[15926], 00:10:52.931 | 99.00th=[27919], 99.50th=[31851], 99.90th=[31851], 99.95th=[31851], 00:10:52.931 | 99.99th=[31851] 00:10:52.931 bw ( KiB/s): min=24928, max=28263, per=29.23%, avg=26595.50, stdev=2358.20, samples=2 00:10:52.931 iops : min= 6232, max= 7065, avg=6648.50, stdev=589.02, samples=2 00:10:52.931 lat (usec) : 750=0.02%, 1000=0.05% 00:10:52.931 lat (msec) : 2=0.75%, 4=3.43%, 10=59.34%, 20=31.70%, 50=4.70% 00:10:52.931 cpu : usr=5.17%, sys=6.87%, ctx=483, majf=0, minf=1 00:10:52.931 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:52.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:52.931 issued rwts: total=6416,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.931 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:52.931 job3: (groupid=0, jobs=1): err= 0: pid=464176: Wed Nov 20 15:20:41 2024 00:10:52.931 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:10:52.931 slat (nsec): min=967, max=7535.9k, avg=110964.75, stdev=598715.66 00:10:52.931 clat (usec): min=8436, max=29931, avg=14101.48, stdev=3866.09 00:10:52.931 lat (usec): min=8442, max=29937, avg=14212.44, stdev=3893.04 00:10:52.931 clat percentiles (usec): 00:10:52.931 | 1.00th=[ 8848], 5.00th=[10290], 10.00th=[10945], 20.00th=[11469], 00:10:52.931 | 30.00th=[11994], 40.00th=[12518], 50.00th=[12911], 60.00th=[13304], 00:10:52.931 | 70.00th=[14484], 80.00th=[16188], 90.00th=[19530], 95.00th=[21365], 00:10:52.932 | 99.00th=[28443], 99.50th=[30016], 99.90th=[30016], 99.95th=[30016], 00:10:52.932 | 99.99th=[30016] 00:10:52.932 write: IOPS=4947, BW=19.3MiB/s (20.3MB/s)(19.4MiB/1003msec); 0 zone resets 00:10:52.932 slat (nsec): min=1590, max=9550.8k, avg=93715.19, stdev=406507.45 00:10:52.932 clat (usec): min=2171, max=19680, avg=12391.91, stdev=2580.62 00:10:52.932 lat (usec): min=2542, max=19684, avg=12485.63, stdev=2586.56 00:10:52.932 clat percentiles (usec): 00:10:52.932 | 1.00th=[ 5014], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[10421], 00:10:52.932 | 30.00th=[11338], 40.00th=[11731], 50.00th=[12256], 60.00th=[12649], 00:10:52.932 | 70.00th=[13435], 80.00th=[14615], 90.00th=[15795], 95.00th=[16712], 00:10:52.932 | 99.00th=[18744], 99.50th=[18744], 99.90th=[19792], 99.95th=[19792], 00:10:52.932 | 99.99th=[19792] 00:10:52.932 bw ( KiB/s): min=18624, max=20015, per=21.23%, avg=19319.50, stdev=983.59, samples=2 00:10:52.932 iops : min= 4656, max= 5003, avg=4829.50, stdev=245.37, samples=2 00:10:52.932 lat (msec) : 4=0.34%, 10=9.55%, 20=86.06%, 50=4.04% 00:10:52.932 cpu : usr=3.29%, sys=4.09%, ctx=619, majf=0, minf=2 00:10:52.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:52.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:52.932 issued rwts: total=4608,4962,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.932 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:52.932 00:10:52.932 Run status group 0 (all jobs): 00:10:52.932 READ: bw=84.7MiB/s (88.8MB/s), 17.9MiB/s-24.9MiB/s (18.8MB/s-26.1MB/s), io=85.2MiB (89.3MB), run=1002-1006msec 00:10:52.932 WRITE: bw=88.8MiB/s (93.2MB/s), 19.3MiB/s-25.8MiB/s (20.3MB/s-27.1MB/s), io=89.4MiB (93.7MB), run=1002-1006msec 00:10:52.932 
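The randwrite pass summarized above is driven by fio-wrapper (-p nvmf -i 4096 -d 128 -t randwrite -r 1 -v), which expands those flags into the [global]/[job0..3] job file echoed earlier in the trace. As a rough standalone sketch only (assuming the four cnode1 namespaces are already connected and enumerate as /dev/nvme0n1..4, as they do in this run):

#!/usr/bin/env bash
# Sketch: replay the verified randwrite pass without the test harness.
# Assumption: the NVMe-oF namespaces are visible as /dev/nvme0n1..4.
cat > nvmf.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=128
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio nvmf.fio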
00:10:52.932 Disk stats (read/write): 00:10:52.932 nvme0n1: ios=4146/4318, merge=0/0, ticks=35395/38957, in_queue=74352, util=81.06% 00:10:52.932 nvme0n2: ios=4096/4271, merge=0/0, ticks=23512/19649, in_queue=43161, util=79.77% 00:10:52.932 nvme0n3: ios=5141/5630, merge=0/0, ticks=49551/43811, in_queue=93362, util=98.66% 00:10:52.932 nvme0n4: ios=3642/3631, merge=0/0, ticks=16170/13902, in_queue=30072, util=98.97% 00:10:52.932 15:20:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:52.932 15:20:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=464311 00:10:52.932 15:20:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:52.932 15:20:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:52.932 [global] 00:10:52.932 thread=1 00:10:52.932 invalidate=1 00:10:52.932 rw=read 00:10:52.932 time_based=1 00:10:52.932 runtime=10 00:10:52.932 ioengine=libaio 00:10:52.932 direct=1 00:10:52.932 bs=4096 00:10:52.932 iodepth=1 00:10:52.932 norandommap=1 00:10:52.932 numjobs=1 00:10:52.932 00:10:52.932 [job0] 00:10:52.932 filename=/dev/nvme0n1 00:10:52.932 [job1] 00:10:52.932 filename=/dev/nvme0n2 00:10:52.932 [job2] 00:10:52.932 filename=/dev/nvme0n3 00:10:52.932 [job3] 00:10:52.932 filename=/dev/nvme0n4 00:10:53.213 Could not set queue depth (nvme0n1) 00:10:53.213 Could not set queue depth (nvme0n2) 00:10:53.213 Could not set queue depth (nvme0n3) 00:10:53.213 Could not set queue depth (nvme0n4) 00:10:53.473 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:53.474 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:53.474 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:53.474 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:53.474 fio-3.35 00:10:53.474 Starting 4 threads 00:10:56.019 15:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:56.019 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=262144, buflen=4096 00:10:56.019 fio: pid=464677, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:56.019 15:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:56.280 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:56.280 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:56.280 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=352256, buflen=4096 00:10:56.280 fio: pid=464676, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:56.541 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:56.541 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:56.541 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=290816, buflen=4096 00:10:56.541 fio: pid=464641, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:56.802 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=11329536, buflen=4096 00:10:56.802 fio: pid=464660, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:56.802 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:56.803 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:56.803 00:10:56.803 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=464641: Wed Nov 20 15:20:45 2024 00:10:56.803 read: IOPS=24, BW=95.0KiB/s (97.3kB/s)(284KiB/2990msec) 00:10:56.803 slat (usec): min=19, max=19876, avg=537.20, stdev=3054.16 00:10:56.803 clat (usec): min=1016, max=45594, avg=41254.28, stdev=4882.04 00:10:56.803 lat (usec): min=1104, max=61031, avg=41560.06, stdev=5408.90 00:10:56.803 clat percentiles (usec): 00:10:56.803 | 1.00th=[ 1020], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:56.803 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:10:56.803 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:56.803 | 99.00th=[45351], 99.50th=[45351], 99.90th=[45351], 99.95th=[45351], 00:10:56.803 | 99.99th=[45351] 00:10:56.803 bw ( KiB/s): min= 95, max= 96, per=2.49%, avg=95.80, stdev= 0.45, samples=5 00:10:56.803 iops : min= 23, max= 24, avg=23.80, stdev= 0.45, samples=5 00:10:56.803 lat (msec) : 2=1.39%, 50=97.22% 00:10:56.803 cpu : usr=0.00%, sys=0.10%, ctx=75, majf=0, minf=1 00:10:56.803 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:56.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.803 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.803 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.803 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:56.803 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=464660: Wed Nov 20 15:20:45 2024 00:10:56.803 read: IOPS=882, BW=3528KiB/s (3613kB/s)(10.8MiB/3136msec) 00:10:56.803 slat (usec): min=5, max=29409, avg=67.71, stdev=943.23 00:10:56.803 clat (usec): min=300, max=41313, avg=1051.57, stdev=780.17 00:10:56.803 lat (usec): min=307, max=41320, avg=1119.30, stdev=1217.22 00:10:56.803 clat percentiles (usec): 00:10:56.803 | 1.00th=[ 553], 5.00th=[ 701], 10.00th=[ 758], 20.00th=[ 979], 00:10:56.803 | 30.00th=[ 1037], 40.00th=[ 1057], 50.00th=[ 1090], 60.00th=[ 1106], 00:10:56.803 | 70.00th=[ 1123], 80.00th=[ 1139], 90.00th=[ 1156], 95.00th=[ 1188], 00:10:56.803 | 99.00th=[ 1221], 99.50th=[ 1237], 99.90th=[ 1336], 99.95th=[ 1598], 00:10:56.803 | 99.99th=[41157] 00:10:56.803 bw ( KiB/s): min= 3560, max= 3632, per=94.10%, avg=3585.33, stdev=30.95, samples=6 00:10:56.803 iops : min= 890, max= 908, avg=896.33, stdev= 7.74, samples=6 00:10:56.803 lat (usec) : 500=0.58%, 750=8.53%, 1000=13.91% 00:10:56.803 lat (msec) : 2=76.91%, 50=0.04% 00:10:56.803 cpu : usr=0.80%, sys=2.78%, ctx=2773, majf=0, minf=2 
00:10:56.803 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:56.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.803 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.803 issued rwts: total=2767,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.803 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:56.803 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=464676: Wed Nov 20 15:20:45 2024 00:10:56.803 read: IOPS=30, BW=122KiB/s (125kB/s)(344KiB/2811msec) 00:10:56.803 slat (usec): min=6, max=6592, avg=99.31, stdev=704.31 00:10:56.803 clat (usec): min=445, max=42035, avg=32329.66, stdev=16902.51 00:10:56.803 lat (usec): min=475, max=47902, avg=32429.77, stdev=16963.89 00:10:56.803 clat percentiles (usec): 00:10:56.803 | 1.00th=[ 445], 5.00th=[ 519], 10.00th=[ 578], 20.00th=[ 848], 00:10:56.803 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:56.803 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:10:56.803 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:56.803 | 99.99th=[42206] 00:10:56.803 bw ( KiB/s): min= 96, max= 104, per=2.60%, avg=99.20, stdev= 4.38, samples=5 00:10:56.803 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:10:56.803 lat (usec) : 500=2.30%, 750=16.09%, 1000=2.30% 00:10:56.803 lat (msec) : 4=1.15%, 50=77.01% 00:10:56.803 cpu : usr=0.14%, sys=0.00%, ctx=88, majf=0, minf=2 00:10:56.803 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:56.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.803 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.803 issued rwts: total=87,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.803 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:56.803 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=464677: Wed Nov 20 15:20:45 2024 00:10:56.803 read: IOPS=24, BW=98.3KiB/s (101kB/s)(256KiB/2605msec) 00:10:56.803 slat (nsec): min=22226, max=61540, avg=26944.18, stdev=4394.19 00:10:56.803 clat (usec): min=708, max=41104, avg=40333.74, stdev=5032.10 00:10:56.803 lat (usec): min=770, max=41130, avg=40360.68, stdev=5027.71 00:10:56.803 clat percentiles (usec): 00:10:56.803 | 1.00th=[ 709], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:56.803 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:56.803 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:56.803 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:56.803 | 99.99th=[41157] 00:10:56.803 bw ( KiB/s): min= 96, max= 104, per=2.60%, avg=99.00, stdev= 4.12, samples=5 00:10:56.803 iops : min= 24, max= 26, avg=24.60, stdev= 0.89, samples=5 00:10:56.803 lat (usec) : 750=1.54% 00:10:56.803 lat (msec) : 50=96.92% 00:10:56.803 cpu : usr=0.00%, sys=0.12%, ctx=65, majf=0, minf=2 00:10:56.803 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:56.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.803 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.803 issued rwts: total=65,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.803 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:56.803 00:10:56.803 Run status group 0 (all 
jobs): 00:10:56.803 READ: bw=3810KiB/s (3901kB/s), 95.0KiB/s-3528KiB/s (97.3kB/s-3613kB/s), io=11.7MiB (12.2MB), run=2605-3136msec 00:10:56.803 00:10:56.803 Disk stats (read/write): 00:10:56.803 nvme0n1: ios=68/0, merge=0/0, ticks=2801/0, in_queue=2801, util=94.12% 00:10:56.803 nvme0n2: ios=2764/0, merge=0/0, ticks=2842/0, in_queue=2842, util=92.81% 00:10:56.803 nvme0n3: ios=64/0, merge=0/0, ticks=2563/0, in_queue=2563, util=95.96% 00:10:56.803 nvme0n4: ios=64/0, merge=0/0, ticks=2583/0, in_queue=2583, util=96.39% 00:10:56.803 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:56.803 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:57.064 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:57.064 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:57.325 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:57.325 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:57.325 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:57.325 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:57.585 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:57.585 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 464311 00:10:57.585 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:57.585 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:57.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.586 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:57.586 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:57.586 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:57.586 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:57.586 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:57.586 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:57.586 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:57.586 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:57.586 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:57.586 nvmf hotplug 
test: fio failed as expected 00:10:57.586 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:57.846 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:57.846 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:57.846 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:57.846 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:57.846 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:57.846 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:57.846 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:57.846 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:57.846 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:57.846 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:57.846 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:57.846 rmmod nvme_tcp 00:10:57.846 rmmod nvme_fabrics 00:10:57.846 rmmod nvme_keyring 00:10:58.107 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:58.107 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:58.107 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:58.107 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 460796 ']' 00:10:58.107 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 460796 00:10:58.107 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 460796 ']' 00:10:58.107 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 460796 00:10:58.107 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:58.107 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:58.107 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 460796 00:10:58.107 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:58.107 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:58.107 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 460796' 00:10:58.107 killing process with pid 460796 00:10:58.107 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 460796 00:10:58.107 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 460796 00:10:58.107 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:58.107 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp 
== \t\c\p ]] 00:10:58.107 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:58.107 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:58.107 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:58.107 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:58.107 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:58.107 15:20:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:58.107 15:20:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:58.107 15:20:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.107 15:20:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:58.107 15:20:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.655 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:00.655 00:11:00.655 real 0m29.442s 00:11:00.655 user 2m41.939s 00:11:00.655 sys 0m9.515s 00:11:00.655 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.655 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.655 ************************************ 00:11:00.655 END TEST nvmf_fio_target 00:11:00.655 ************************************ 00:11:00.655 15:20:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:00.655 15:20:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:00.655 15:20:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.655 15:20:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:00.655 ************************************ 00:11:00.655 START TEST nvmf_bdevio 00:11:00.655 ************************************ 00:11:00.655 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:00.655 * Looking for test storage... 
00:11:00.655 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:00.655 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:00.655 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:11:00.655 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:00.655 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:00.655 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:00.655 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:00.655 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:00.655 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:00.655 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:00.655 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:00.655 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:00.655 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:00.655 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:00.655 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:00.655 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:00.655 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:00.655 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:00.655 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:00.655 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:00.655 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:00.655 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:00.655 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:00.655 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:00.655 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:00.655 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:00.655 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:00.655 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:00.655 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:00.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.656 --rc genhtml_branch_coverage=1 00:11:00.656 --rc genhtml_function_coverage=1 00:11:00.656 --rc genhtml_legend=1 00:11:00.656 --rc geninfo_all_blocks=1 00:11:00.656 --rc geninfo_unexecuted_blocks=1 00:11:00.656 00:11:00.656 ' 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:00.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.656 --rc genhtml_branch_coverage=1 00:11:00.656 --rc genhtml_function_coverage=1 00:11:00.656 --rc genhtml_legend=1 00:11:00.656 --rc geninfo_all_blocks=1 00:11:00.656 --rc geninfo_unexecuted_blocks=1 00:11:00.656 00:11:00.656 ' 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:00.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.656 --rc genhtml_branch_coverage=1 00:11:00.656 --rc genhtml_function_coverage=1 00:11:00.656 --rc genhtml_legend=1 00:11:00.656 --rc geninfo_all_blocks=1 00:11:00.656 --rc geninfo_unexecuted_blocks=1 00:11:00.656 00:11:00.656 ' 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:00.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.656 --rc genhtml_branch_coverage=1 00:11:00.656 --rc genhtml_function_coverage=1 00:11:00.656 --rc genhtml_legend=1 00:11:00.656 --rc geninfo_all_blocks=1 00:11:00.656 --rc geninfo_unexecuted_blocks=1 00:11:00.656 00:11:00.656 ' 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:00.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:00.656 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:00.657 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:08.800 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:08.800 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:08.800 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:08.800 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:08.800 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:08.800 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:08.800 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:08.800 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:08.801 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:08.801 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:08.801 15:20:56 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:08.801 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:08.801 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:08.801 
15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:08.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:08.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:11:08.801 00:11:08.801 --- 10.0.0.2 ping statistics --- 00:11:08.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.801 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:08.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:08.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:11:08.801 00:11:08.801 --- 10.0.0.1 ping statistics --- 00:11:08.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.801 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:08.801 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:08.802 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:08.802 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:08.802 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:08.802 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=469859 00:11:08.802 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 469859 00:11:08.802 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:08.802 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 469859 ']' 00:11:08.802 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.802 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:08.802 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.802 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:08.802 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:08.802 [2024-11-20 15:20:56.926266] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
00:11:08.802 [2024-11-20 15:20:56.926320] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:08.802 [2024-11-20 15:20:57.020693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:08.802 [2024-11-20 15:20:57.065830] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:08.802 [2024-11-20 15:20:57.065878] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:08.802 [2024-11-20 15:20:57.065887] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:08.802 [2024-11-20 15:20:57.065894] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:08.802 [2024-11-20 15:20:57.065900] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:08.802 [2024-11-20 15:20:57.067822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:08.802 [2024-11-20 15:20:57.067979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:08.802 [2024-11-20 15:20:57.068134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:08.802 [2024-11-20 15:20:57.068135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:08.802 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:08.802 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:08.802 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:08.802 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:08.802 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.063 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:09.063 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:09.063 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.063 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.063 [2024-11-20 15:20:57.780300] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:09.063 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.063 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:09.063 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.063 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.063 Malloc0 00:11:09.063 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.063 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:09.063 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.063 15:20:57 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.063 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.063 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:09.063 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.063 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.063 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.063 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:09.063 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.063 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.063 [2024-11-20 15:20:57.862250] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:09.063 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.063 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:09.063 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:09.063 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:09.063 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:09.063 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:09.063 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:09.063 { 00:11:09.063 "params": { 00:11:09.063 "name": "Nvme$subsystem", 00:11:09.063 "trtype": "$TEST_TRANSPORT", 00:11:09.063 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:09.063 "adrfam": "ipv4", 00:11:09.063 "trsvcid": "$NVMF_PORT", 00:11:09.063 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:09.063 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:09.063 "hdgst": ${hdgst:-false}, 00:11:09.063 "ddgst": ${ddgst:-false} 00:11:09.063 }, 00:11:09.063 "method": "bdev_nvme_attach_controller" 00:11:09.063 } 00:11:09.063 EOF 00:11:09.063 )") 00:11:09.063 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:09.063 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:11:09.063 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:09.063 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:09.063 "params": { 00:11:09.063 "name": "Nvme1", 00:11:09.063 "trtype": "tcp", 00:11:09.063 "traddr": "10.0.0.2", 00:11:09.063 "adrfam": "ipv4", 00:11:09.063 "trsvcid": "4420", 00:11:09.063 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:09.063 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:09.063 "hdgst": false, 00:11:09.063 "ddgst": false 00:11:09.063 }, 00:11:09.063 "method": "bdev_nvme_attach_controller" 00:11:09.063 }' 00:11:09.063 [2024-11-20 15:20:57.921247] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
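The rpc_cmd trace above is the entire bdevio target provisioning. As a standalone sketch (assuming a running nvmf_tgt and the stock scripts/rpc.py client on its default /var/tmp/spdk.sock socket; socket path and working directory are assumptions, the five RPC calls are verbatim from the trace):

  # Sketch of the target setup traced above.
  RPC=./scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192    # same transport options as the trace
  $RPC bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM-backed bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The gen_nvmf_target_json output that follows is handed to bdevio over /dev/fd/62, so the test binary attaches to the listener it just created without writing a config file to disk.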
00:11:09.063 [2024-11-20 15:20:57.921312] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid470018 ] 00:11:09.063 [2024-11-20 15:20:58.013342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:09.325 [2024-11-20 15:20:58.069853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.325 [2024-11-20 15:20:58.070015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.325 [2024-11-20 15:20:58.070015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:09.586 I/O targets: 00:11:09.586 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:09.586 00:11:09.586 00:11:09.586 CUnit - A unit testing framework for C - Version 2.1-3 00:11:09.586 http://cunit.sourceforge.net/ 00:11:09.586 00:11:09.586 00:11:09.586 Suite: bdevio tests on: Nvme1n1 00:11:09.586 Test: blockdev write read block ...passed 00:11:09.586 Test: blockdev write zeroes read block ...passed 00:11:09.586 Test: blockdev write zeroes read no split ...passed 00:11:09.586 Test: blockdev write zeroes read split ...passed 00:11:09.586 Test: blockdev write zeroes read split partial ...passed 00:11:09.586 Test: blockdev reset ...[2024-11-20 15:20:58.446335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:09.586 [2024-11-20 15:20:58.446436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18bc970 (9): Bad file descriptor 00:11:09.586 [2024-11-20 15:20:58.462205] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:11:09.586 passed 00:11:09.586 Test: blockdev write read 8 blocks ...passed 00:11:09.586 Test: blockdev write read size > 128k ...passed 00:11:09.586 Test: blockdev write read invalid size ...passed 00:11:09.846 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:09.846 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:09.846 Test: blockdev write read max offset ...passed 00:11:09.846 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:09.846 Test: blockdev writev readv 8 blocks ...passed 00:11:09.846 Test: blockdev writev readv 30 x 1block ...passed 00:11:09.846 Test: blockdev writev readv block ...passed 00:11:09.846 Test: blockdev writev readv size > 128k ...passed 00:11:09.846 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:09.846 Test: blockdev comparev and writev ...[2024-11-20 15:20:58.687565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:09.846 [2024-11-20 15:20:58.687614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:09.846 [2024-11-20 15:20:58.687631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:09.846 [2024-11-20 15:20:58.687640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:09.846 [2024-11-20 15:20:58.688186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:09.846 [2024-11-20 15:20:58.688201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:09.846 [2024-11-20 15:20:58.688215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:09.846 [2024-11-20 15:20:58.688223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:09.846 [2024-11-20 15:20:58.688811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:09.846 [2024-11-20 15:20:58.688825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:09.846 [2024-11-20 15:20:58.688839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:09.846 [2024-11-20 15:20:58.688847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:09.846 [2024-11-20 15:20:58.689430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:09.846 [2024-11-20 15:20:58.689443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:09.847 [2024-11-20 15:20:58.689458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:09.847 [2024-11-20 15:20:58.689465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:09.847 passed 00:11:09.847 Test: blockdev nvme passthru rw ...passed 00:11:09.847 Test: blockdev nvme passthru vendor specific ...[2024-11-20 15:20:58.773809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:09.847 [2024-11-20 15:20:58.773827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:09.847 [2024-11-20 15:20:58.774242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:09.847 [2024-11-20 15:20:58.774257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:09.847 [2024-11-20 15:20:58.774627] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:09.847 [2024-11-20 15:20:58.774639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:09.847 [2024-11-20 15:20:58.775026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:09.847 [2024-11-20 15:20:58.775044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:09.847 passed 00:11:09.847 Test: blockdev nvme admin passthru ...passed 00:11:10.106 Test: blockdev copy ...passed 00:11:10.106 00:11:10.106 Run Summary: Type Total Ran Passed Failed Inactive 00:11:10.106 suites 1 1 n/a 0 0 00:11:10.106 tests 23 23 23 0 0 00:11:10.106 asserts 152 152 152 0 n/a 00:11:10.106 00:11:10.106 Elapsed time = 1.037 seconds 00:11:10.107 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:10.107 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.107 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:10.107 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.107 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:10.107 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:10.107 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:10.107 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:10.107 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:10.107 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:10.107 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:10.107 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:10.107 rmmod nvme_tcp 00:11:10.107 rmmod nvme_fabrics 00:11:10.107 rmmod nvme_keyring 00:11:10.107 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:10.107 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:10.107 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
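Teardown is the setup in reverse. A condensed sketch of what the traced nvmf_delete_subsystem plus nvmftestfini/nvmfcleanup path boils down to for the TCP case (the retry loop, iptables restore, and netns cleanup are omitted; $nvmfpid is a stand-in for the stored target PID, 469859 in this run):

  # Simplified from the trace; the real helper retries the modprobe up to 20 times.
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  sync
  modprobe -v -r nvme-tcp       # the rmmod lines above show nvme_fabrics/nvme_keyring leaving with it
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"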
00:11:10.107 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 469859 ']' 00:11:10.107 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 469859 00:11:10.107 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 469859 ']' 00:11:10.107 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 469859 00:11:10.107 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:10.107 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:10.107 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 469859 00:11:10.366 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:10.366 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:10.366 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 469859' 00:11:10.366 killing process with pid 469859 00:11:10.366 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 469859 00:11:10.366 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 469859 00:11:10.366 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:10.366 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:10.366 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:10.367 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:10.367 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:10.367 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:10.367 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:10.367 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:10.367 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:10.367 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.367 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:10.367 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:12.917 00:11:12.917 real 0m12.163s 00:11:12.917 user 0m12.928s 00:11:12.917 sys 0m6.301s 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:12.917 ************************************ 00:11:12.917 END TEST nvmf_bdevio 00:11:12.917 ************************************ 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:12.917 00:11:12.917 real 5m6.013s 00:11:12.917 user 11m57.091s 00:11:12.917 sys 1m51.342s 00:11:12.917 
15:21:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:12.917 ************************************ 00:11:12.917 END TEST nvmf_target_core 00:11:12.917 ************************************ 00:11:12.917 15:21:01 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:12.917 15:21:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:12.917 15:21:01 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.917 15:21:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:12.917 ************************************ 00:11:12.917 START TEST nvmf_target_extra 00:11:12.917 ************************************ 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:12.917 * Looking for test storage... 00:11:12.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:12.917 15:21:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:12.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.917 --rc genhtml_branch_coverage=1 00:11:12.917 --rc genhtml_function_coverage=1 00:11:12.918 --rc genhtml_legend=1 00:11:12.918 --rc geninfo_all_blocks=1 00:11:12.918 --rc geninfo_unexecuted_blocks=1 00:11:12.918 00:11:12.918 ' 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:12.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.918 --rc genhtml_branch_coverage=1 00:11:12.918 --rc genhtml_function_coverage=1 00:11:12.918 --rc genhtml_legend=1 00:11:12.918 --rc geninfo_all_blocks=1 00:11:12.918 --rc geninfo_unexecuted_blocks=1 00:11:12.918 00:11:12.918 ' 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:12.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.918 --rc genhtml_branch_coverage=1 00:11:12.918 --rc genhtml_function_coverage=1 00:11:12.918 --rc genhtml_legend=1 00:11:12.918 --rc geninfo_all_blocks=1 00:11:12.918 --rc geninfo_unexecuted_blocks=1 00:11:12.918 00:11:12.918 ' 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:12.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.918 --rc genhtml_branch_coverage=1 00:11:12.918 --rc genhtml_function_coverage=1 00:11:12.918 --rc genhtml_legend=1 00:11:12.918 --rc geninfo_all_blocks=1 00:11:12.918 --rc geninfo_unexecuted_blocks=1 00:11:12.918 00:11:12.918 ' 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
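The scripts/common.sh walk above is the harness deciding whether lcov is older than 2.x (the `lt 1.15 2` check) before setting the LCOV branch/function-coverage flags. A condensed sketch of the field-by-field compare it steps through, with the `decimal` input validation dropped:

  # Rough equivalent of the cmp_versions trace above (no input validation).
  lt() {  # lt 1.15 2 -> success when $1 sorts before $2
    local IFS='.-:' i a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      ((${a[i]:-0} < ${b[i]:-0})) && return 0
      ((${a[i]:-0} > ${b[i]:-0})) && return 1
    done
    return 1  # versions are equal
  }

so `lt 1.15 2` succeeds on the first field (1 < 2), which is exactly where the traced run hits `return 0`.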
00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:12.918 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:12.918 ************************************ 00:11:12.918 START TEST nvmf_example 00:11:12.918 ************************************ 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:12.918 * Looking for test storage... 
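The "common.sh: line 33: [: : integer expression expected" message above is a genuine, though harmless, bash error: the traced test is '[' '' -eq 1 ']', i.e. an unset variable handed to -eq, which only accepts integers. The test exits with status 2 and the script carries on. A two-line illustration, with FLAG as a hypothetical stand-in for whatever variable line 33 actually reads:

  [ "$FLAG" -eq 1 ] && echo yes       # FLAG unset -> "[: : integer expression expected"
  [ "${FLAG:-0}" -eq 1 ] && echo yes  # defaulting the expansion keeps the test well-formed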
00:11:12.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:11:12.918 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:13.180 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:13.180 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:13.180 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:13.180 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:13.180 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:13.180 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:13.180 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:13.180 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:13.180 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:13.180 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:13.180 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:13.180 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:13.180 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:13.180 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:13.180 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:13.180 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:13.180 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:13.180 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:13.180 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:13.180 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:13.180 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:13.180 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:13.180 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:13.180 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:13.180 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:13.180 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:13.180 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:13.180 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:13.180 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:13.180 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:13.180 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:13.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.180 --rc genhtml_branch_coverage=1 00:11:13.180 --rc genhtml_function_coverage=1 00:11:13.180 --rc genhtml_legend=1 00:11:13.180 --rc geninfo_all_blocks=1 00:11:13.180 --rc geninfo_unexecuted_blocks=1 00:11:13.180 00:11:13.180 ' 00:11:13.180 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:13.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.180 --rc genhtml_branch_coverage=1 00:11:13.180 --rc genhtml_function_coverage=1 00:11:13.180 --rc genhtml_legend=1 00:11:13.180 --rc geninfo_all_blocks=1 00:11:13.180 --rc geninfo_unexecuted_blocks=1 00:11:13.180 00:11:13.180 ' 00:11:13.180 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:13.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.180 --rc genhtml_branch_coverage=1 00:11:13.180 --rc genhtml_function_coverage=1 00:11:13.180 --rc genhtml_legend=1 00:11:13.180 --rc geninfo_all_blocks=1 00:11:13.180 --rc geninfo_unexecuted_blocks=1 00:11:13.180 00:11:13.180 ' 00:11:13.180 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:13.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.180 --rc genhtml_branch_coverage=1 00:11:13.180 --rc genhtml_function_coverage=1 00:11:13.180 --rc genhtml_legend=1 00:11:13.180 --rc geninfo_all_blocks=1 00:11:13.180 --rc geninfo_unexecuted_blocks=1 00:11:13.180 00:11:13.180 ' 00:11:13.180 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:13.180 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:13.180 15:21:01 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:13.180 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:13.180 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:13.180 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:13.181 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:13.181 15:21:01 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:13.181 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:21.326 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:21.327 15:21:09 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:21.327 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:21.327 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:21.327 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:21.327 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.327 15:21:09 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:21.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:21.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.709 ms
00:11:21.327
00:11:21.327 --- 10.0.0.2 ping statistics ---
00:11:21.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:21.327 rtt min/avg/max/mdev = 0.709/0.709/0.709/0.000 ms
00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:21.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:21.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms
00:11:21.327
00:11:21.327 --- 10.0.0.1 ping statistics ---
00:11:21.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:21.327 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms
00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0
00:11:21.327 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:21.328 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:21.328 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:21.328 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:21.328 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:21.328 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:21.328 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:21.328 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:11:21.328 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:11:21.328 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:21.328 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:21.328 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:11:21.328 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:11:21.328 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=474959
00:11:21.328 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:11:21.328 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:11:21.328 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 474959
00:11:21.328 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 474959 ']'
00:11:21.328 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:21.328 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:21.328 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:21.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:21.328 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:21.328 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:21.589 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:21.589 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0
00:11:21.589 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example
00:11:21.589 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:21.589 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:21.589 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:11:21.589 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:21.589 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:21.589 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:21.850 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512
00:11:21.850 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:21.850 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:21.850 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:21.850 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 '
00:11:21.850 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:11:21.850 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:21.850 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:21.850 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:21.850 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs
00:11:21.850 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:11:21.850 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:21.850 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:21.850 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:21.850 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:21.850 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
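The rpc_cmd sequence traced above is the entire provisioning flow for the example target: one transport, one malloc bdev, one subsystem, one namespace, one listener. For reference, the same five steps can be replayed by hand with SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock socket. A minimal sketch, assuming a target is already running; the NQN, serial and listener address are the values from this run, '-b Malloc0' is an assumption that pins the bdev name the trace got back, and the trace's extra '-o' transport option is omitted here:

    #!/usr/bin/env bash
    # Sketch: replay the traced subsystem setup against a running SPDK target.
    set -euo pipefail
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -u 8192          # TCP transport, 8 KiB IO unit size
    $rpc bdev_malloc_create -b Malloc0 64 512          # 64 MiB RAM disk, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

In the harness itself, rpc_cmd additionally runs the client inside the cvl_0_0_ns_spdk network namespace set up earlier.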
00:11:21.850 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:21.850 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:21.850 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:11:21.850 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:11:34.080 Initializing NVMe Controllers
00:11:34.080 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:34.080 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:11:34.080 Initialization complete. Launching workers.
00:11:34.080 ========================================================
00:11:34.080                                                                           Latency(us)
00:11:34.080 Device Information                                               :       IOPS      MiB/s    Average        min        max
00:11:34.080 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   18565.83      72.52    3446.84     627.04   17757.32
00:11:34.080 ========================================================
00:11:34.080 Total                                                            :   18565.83      72.52    3446.84     627.04   17757.32
00:11:34.080
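The summary table is internally consistent: with the 4096-byte I/O size from '-o 4096', 18565.83 IOPS works out to exactly the reported 72.52 MiB/s. A one-line check reproduces the conversion:

    # Cross-check the perf summary: MiB/s = IOPS * I/O size in bytes / 2^20.
    awk 'BEGIN { printf "%.2f MiB/s\n", 18565.83 * 4096 / (1024 * 1024) }'   # prints 72.52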
00:11:34.080 15:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:11:34.080 15:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:11:34.080 15:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:34.080 15:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:11:34.081 15:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:34.081 15:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:11:34.081 15:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:34.081 15:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:34.081 rmmod nvme_tcp
00:11:34.081 rmmod nvme_fabrics
00:11:34.081 rmmod nvme_keyring
00:11:34.081 15:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:34.081 15:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:11:34.081 15:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:11:34.081 15:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 474959 ']'
00:11:34.081 15:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 474959
00:11:34.081 15:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 474959 ']'
00:11:34.081 15:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 474959
00:11:34.081 15:21:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname
00:11:34.081 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:34.081 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 474959
00:11:34.081 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf
00:11:34.081 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']'
00:11:34.081 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 474959'
00:11:34.081 killing process with pid 474959
00:11:34.081 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 474959
00:11:34.081 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 474959
00:11:34.081 nvmf threads initialize successfully
00:11:34.081 bdev subsystem init successfully
00:11:34.081 created a nvmf target service
00:11:34.081 create targets's poll groups done
00:11:34.081 all subsystems of target started
00:11:34.081 nvmf target is running
00:11:34.081 all subsystems of target stopped
00:11:34.081 destroy targets's poll groups done
00:11:34.081 destroyed the nvmf target service
00:11:34.081 bdev subsystem finish successfully
00:11:34.081 nvmf threads destroy successfully
00:11:34.081 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:11:34.081 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:11:34.081 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:11:34.081 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr
00:11:34.081 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:11:34.081 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save
00:11:34.081 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore
00:11:34.081 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:11:34.081 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns
00:11:34.081 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:34.081 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:34.081 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:34.342 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:11:34.342 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:11:34.342 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:34.342 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:34.604
00:11:34.604 real	0m21.573s
00:11:34.604 user	0m47.221s
00:11:34.604 sys	0m7.048s
00:11:34.604 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:34.604 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:34.604 ************************************
00:11:34.604 END TEST nvmf_example
00:11:34.604 ************************************
00:11:34.604 15:21:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
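nvmftestfini, traced above, tears the fixture down in roughly the reverse order it was built: sync, unload the kernel initiator modules, kill the target app, restore iptables, remove the namespace, flush the test NIC. A condensed sketch of the same steps; the interface and namespace names are the ones from this run, and the 'ip netns delete' line is an assumption about what _remove_spdk_ns does internally:

    # Condensed teardown, mirroring the nvmftestfini trace above.
    nvmfpid=474959                                          # pid reported by waitforlisten
    sync
    modprobe -v -r nvme-tcp nvme-fabrics                    # unload initiator modules
    kill "$nvmfpid" 2>/dev/null || true                     # stop target app (harness also waits on it)
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop test-only firewall rules
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true     # assumption: _remove_spdk_ns internals
    ip -4 addr flush cvl_0_1                                # clear the second test port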
00:11:34.604 15:21:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:34.604 15:21:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:34.604 15:21:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:11:34.604 ************************************
00:11:34.604 START TEST nvmf_filesystem
00:11:34.604 ************************************
00:11:34.604 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:11:34.604 * Looking for test storage...
00:11:34.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:11:34.604 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:11:34.604 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version
00:11:34.604 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:11:34.604 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:11:34.604 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-:
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-:
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<'
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:11:34.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:34.869 --rc genhtml_branch_coverage=1
00:11:34.869 --rc genhtml_function_coverage=1
00:11:34.869 --rc genhtml_legend=1
00:11:34.869 --rc geninfo_all_blocks=1
00:11:34.869 --rc geninfo_unexecuted_blocks=1
00:11:34.869
00:11:34.869 '
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:11:34.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:34.869 --rc genhtml_branch_coverage=1
00:11:34.869 --rc genhtml_function_coverage=1
00:11:34.869 --rc genhtml_legend=1
00:11:34.869 --rc geninfo_all_blocks=1
00:11:34.869 --rc geninfo_unexecuted_blocks=1
00:11:34.869
00:11:34.869 '
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:11:34.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:34.869 --rc genhtml_branch_coverage=1
00:11:34.869 --rc genhtml_function_coverage=1
00:11:34.869 --rc genhtml_legend=1
00:11:34.869 --rc geninfo_all_blocks=1
00:11:34.869 --rc geninfo_unexecuted_blocks=1
00:11:34.869
00:11:34.869 '
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:11:34.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:34.869 --rc genhtml_branch_coverage=1
00:11:34.869 --rc genhtml_function_coverage=1
00:11:34.869 --rc genhtml_legend=1
00:11:34.869 --rc geninfo_all_blocks=1
00:11:34.869 --rc geninfo_unexecuted_blocks=1
00:11:34.869
00:11:34.869 '
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh
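The lt/cmp_versions trace above (here deciding that lcov 1.15 predates 2) is a component-wise numeric comparison: both strings are split on '.', '-' and ':', missing components default to 0, and the first unequal pair decides. A condensed re-implementation for illustration; the real scripts/common.sh additionally validates each component via its decimal helper, as the trace shows:

    version_lt() {    # usage: version_lt A B  ->  exit 0 (true) if A < B
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1      # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "old lcov: use the legacy --rc option names"   # true, as traced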
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']'
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]]
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH=
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH=
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR=
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR=
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y
00:11:34.869 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH=
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR=
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH=
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR=
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX=
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt")
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt")
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost")
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd")
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt")
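Everything build_config.sh exports above is a plain CONFIG_<FEATURE>=y/n shell variable, so downstream scripts gate work on them with ordinary string tests. A sketch of the pattern, assuming it is run from the spdk repo root; the echo messages are illustrative:

    # Sketch: branch on the build configuration sourced above.
    source test/common/build_config.sh          # defines CONFIG_UBSAN, CONFIG_VFIO_USER, ...
    if [[ $CONFIG_UBSAN == y ]]; then
        echo "UBSAN build: the UBSAN_OPTIONS exported later in this trace will apply"
    fi
    [[ $CONFIG_VFIO_USER == y ]] && echo "vfio-user test cases available"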
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]]
00:11:34.870 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:11:34.870 #define SPDK_CONFIG_H
00:11:34.870 #define SPDK_CONFIG_AIO_FSDEV 1
00:11:34.870 #define SPDK_CONFIG_APPS 1
00:11:34.870 #define SPDK_CONFIG_ARCH native
00:11:34.870 #undef SPDK_CONFIG_ASAN
00:11:34.870 #undef SPDK_CONFIG_AVAHI
00:11:34.870 #undef SPDK_CONFIG_CET
00:11:34.870 #define SPDK_CONFIG_COPY_FILE_RANGE 1
00:11:34.870 #define SPDK_CONFIG_COVERAGE 1
00:11:34.870 #define SPDK_CONFIG_CROSS_PREFIX
00:11:34.870 #undef SPDK_CONFIG_CRYPTO
00:11:34.870 #undef SPDK_CONFIG_CRYPTO_MLX5
00:11:34.870 #undef SPDK_CONFIG_CUSTOMOCF
00:11:34.870 #undef SPDK_CONFIG_DAOS
00:11:34.870 #define SPDK_CONFIG_DAOS_DIR
00:11:34.870 #define SPDK_CONFIG_DEBUG 1
00:11:34.870 #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:11:34.870 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:11:34.870 #define SPDK_CONFIG_DPDK_INC_DIR
00:11:34.870 #define SPDK_CONFIG_DPDK_LIB_DIR
00:11:34.870 #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:11:34.870 #undef SPDK_CONFIG_DPDK_UADK
00:11:34.870 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:11:34.870 #define SPDK_CONFIG_EXAMPLES 1
00:11:34.870 #undef SPDK_CONFIG_FC
00:11:34.870 #define SPDK_CONFIG_FC_PATH
00:11:34.870 #define SPDK_CONFIG_FIO_PLUGIN 1
00:11:34.870 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:11:34.870 #define SPDK_CONFIG_FSDEV 1
00:11:34.870 #undef SPDK_CONFIG_FUSE
00:11:34.870 #undef SPDK_CONFIG_FUZZER
00:11:34.870 #define SPDK_CONFIG_FUZZER_LIB
00:11:34.870 #undef SPDK_CONFIG_GOLANG
00:11:34.870 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:11:34.870 #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:11:34.870 #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:11:34.870 #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:11:34.870 #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:11:34.870 #undef SPDK_CONFIG_HAVE_LIBBSD
00:11:34.870 #undef SPDK_CONFIG_HAVE_LZ4
00:11:34.870 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1
00:11:34.870 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC
00:11:34.870 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:11:34.870 #define SPDK_CONFIG_IDXD 1
00:11:34.870 #define SPDK_CONFIG_IDXD_KERNEL 1
00:11:34.870 #undef SPDK_CONFIG_IPSEC_MB
00:11:34.870 #define SPDK_CONFIG_IPSEC_MB_DIR
00:11:34.870 #define SPDK_CONFIG_ISAL 1
00:11:34.870 #define SPDK_CONFIG_ISAL_CRYPTO 1
00:11:34.870 #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:11:34.870 #define SPDK_CONFIG_LIBDIR
00:11:34.870 #undef SPDK_CONFIG_LTO
00:11:34.870 #define SPDK_CONFIG_MAX_LCORES 128
00:11:34.870 #define SPDK_CONFIG_MAX_NUMA_NODES 1
00:11:34.870 #define SPDK_CONFIG_NVME_CUSE 1
00:11:34.870 #undef SPDK_CONFIG_OCF
00:11:34.870 #define SPDK_CONFIG_OCF_PATH
00:11:34.870 #define SPDK_CONFIG_OPENSSL_PATH
00:11:34.870 #undef SPDK_CONFIG_PGO_CAPTURE
00:11:34.870 #define SPDK_CONFIG_PGO_DIR
00:11:34.870 #undef SPDK_CONFIG_PGO_USE
00:11:34.870 #define SPDK_CONFIG_PREFIX /usr/local
00:11:34.870 #undef SPDK_CONFIG_RAID5F
00:11:34.870 #undef SPDK_CONFIG_RBD
00:11:34.870 #define SPDK_CONFIG_RDMA 1
00:11:34.870 #define SPDK_CONFIG_RDMA_PROV verbs
00:11:34.870 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:11:34.870 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:11:34.870 #define SPDK_CONFIG_RDMA_SET_TOS 1
00:11:34.870 #define SPDK_CONFIG_SHARED 1
00:11:34.870 #undef SPDK_CONFIG_SMA
00:11:34.870 #define SPDK_CONFIG_TESTS 1
00:11:34.870 #undef SPDK_CONFIG_TSAN
00:11:34.870 #define SPDK_CONFIG_UBLK 1
00:11:34.870 #define SPDK_CONFIG_UBSAN 1
00:11:34.870 #undef SPDK_CONFIG_UNIT_TESTS
00:11:34.870 #undef SPDK_CONFIG_URING
00:11:34.870 #define SPDK_CONFIG_URING_PATH
00:11:34.870 #undef SPDK_CONFIG_URING_ZNS
00:11:34.870 #undef SPDK_CONFIG_USDT
00:11:34.870 #undef SPDK_CONFIG_VBDEV_COMPRESS
00:11:34.870 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:11:34.870 #define SPDK_CONFIG_VFIO_USER 1
00:11:34.870 #define SPDK_CONFIG_VFIO_USER_DIR
00:11:34.870 #define SPDK_CONFIG_VHOST 1
00:11:34.871 #define SPDK_CONFIG_VIRTIO 1
00:11:34.871 #undef SPDK_CONFIG_VTUNE
00:11:34.871 #define SPDK_CONFIG_VTUNE_DIR
00:11:34.871 #define SPDK_CONFIG_WERROR 1
00:11:34.871 #define SPDK_CONFIG_WPDK_DIR
00:11:34.871 #undef SPDK_CONFIG_XNVME
00:11:34.871 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS ))
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
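The applications.sh check a few lines up slurps the generated config.h with $(< file) and glob-matches it, which is why the whole header appears in the trace: under set -x, the expanded file contents are printed inside the [[ ... ]] test. The standalone equivalent of that debug-build check:

    # Same test as common/applications.sh@23, outside the harness.
    config=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h
    if [[ $(< "$config") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        echo "debug build detected"
    fi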
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=()
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0
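Because /etc/opt/spdk-pkgdep/paths/export.sh prepends the toolchain directories every time it is sourced, PATH accumulates the duplicate /opt/... prefixes visible above. The duplicates are harmless to command lookup, but if desired they can be squeezed out while keeping first-occurrence order, e.g.:

    # De-duplicate PATH entries, preserving the first occurrence of each.
    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')
    PATH=${PATH%:}        # drop the trailing ':' left by ORS
    export PATH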
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]=
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E'
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]]
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]]
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]]
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]]
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp)
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm)
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]]
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # :
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0
00:11:34.871 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST
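Each ': 0' / 'export SPDK_TEST_*' pair above is what bash xtrace prints for the usual default-then-export idiom: the ':' builtin forces a ${VAR:=default} expansion, so variables already set by autorun-spdk.conf keep their values. Presumably the underlying source looks like this (reconstructed for illustration, not quoted from autotest_common.sh):

    : "${SPDK_TEST_NVMF:=0}"               # traced as ': 1' because the conf file set it to 1
    export SPDK_TEST_NVMF
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"   # traced as ': tcp' above
    export SPDK_TEST_NVMF_TRANSPORT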
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # :
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # :
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # :
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python
00:11:34.872 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python
00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1
00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1
00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
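The exports traced above pin the sanitizer behavior for everything the test launches. A minimal sketch that reproduces the same environment by hand (every value is copied verbatim from the trace; the option semantics are standard ASan/UBSan/LSan, nothing SPDK-specific):

  export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
  export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
  # the harness seeds one known suppression (a libfuse3 leak) before enabling LSan
  echo leak:libfuse3.so > /var/tmp/asan_suppression_file
  export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file

exitcode=134 makes a UBSan abort exit like SIGABRT (128+6), so a CI wrapper that keys on the exit status treats it the same way as a failed assert.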
00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 477980 ]] 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 477980 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:11:34.873 
15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.C8wxOT 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.C8wxOT/tests/target /tmp/spdk.C8wxOT 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:11:34.873 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:34.874 15:21:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=118327709696 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356509184 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11028799488 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64666886144 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678252544 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847934976 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871302656 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23367680 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=216064 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=287744 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:34.874 15:21:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64677707776 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678256640 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=548864 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935634944 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935647232 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:34.874 * Looking for test storage... 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=118327709696 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=13243392000 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:34.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:34.874 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:35.137 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:35.137 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:35.137 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:35.137 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:35.137 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:35.137 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:35.137 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:35.137 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:35.137 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:35.137 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:35.137 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:35.137 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:35.137 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:35.137 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:35.137 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:35.137 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:35.137 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:35.137 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:35.137 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:35.137 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:35.137 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:35.137 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:35.137 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:35.137 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:35.137 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:35.137 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:35.137 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:35.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.137 --rc genhtml_branch_coverage=1 00:11:35.137 --rc genhtml_function_coverage=1 00:11:35.137 --rc genhtml_legend=1 00:11:35.137 --rc geninfo_all_blocks=1 00:11:35.137 --rc geninfo_unexecuted_blocks=1 00:11:35.137 00:11:35.137 ' 00:11:35.138 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:35.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.138 --rc genhtml_branch_coverage=1 00:11:35.138 --rc genhtml_function_coverage=1 00:11:35.138 --rc genhtml_legend=1 00:11:35.138 --rc geninfo_all_blocks=1 00:11:35.138 --rc geninfo_unexecuted_blocks=1 00:11:35.138 00:11:35.138 ' 00:11:35.138 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:35.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.138 --rc genhtml_branch_coverage=1 00:11:35.138 --rc genhtml_function_coverage=1 00:11:35.138 --rc genhtml_legend=1 00:11:35.138 --rc geninfo_all_blocks=1 00:11:35.138 --rc geninfo_unexecuted_blocks=1 00:11:35.138 00:11:35.138 ' 00:11:35.138 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:35.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.138 --rc genhtml_branch_coverage=1 00:11:35.138 --rc genhtml_function_coverage=1 00:11:35.138 --rc genhtml_legend=1 00:11:35.138 --rc geninfo_all_blocks=1 00:11:35.138 --rc geninfo_unexecuted_blocks=1 00:11:35.138 00:11:35.138 ' 00:11:35.138 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:35.138 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:11:35.138 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:35.138 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:35.138 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:35.138 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:35.138 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:35.138 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:35.138 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:35.138 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:35.138 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:35.138 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:35.138 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:35.138 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:35.138 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:35.138 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:35.138 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:35.138 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:35.138 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:35.138 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:35.138 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.138 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.138 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.138 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.138 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.139 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.139 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:35.139 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.139 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:35.139 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:35.139 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:35.139 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:35.139 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:35.139 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:35.139 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:35.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:35.139 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:35.139 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:35.139 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:35.139 15:21:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:35.139 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:35.139 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:35.139 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:35.139 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:35.139 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:35.139 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:35.139 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:35.139 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.139 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:35.139 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.139 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:35.139 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:35.139 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:35.139 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:43.281 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:43.281 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:43.281 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:43.281 15:21:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:43.282 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:43.282 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:43.282 15:21:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:43.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:43.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.698 ms 00:11:43.282 00:11:43.282 --- 10.0.0.2 ping statistics --- 00:11:43.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.282 rtt min/avg/max/mdev = 0.698/0.698/0.698/0.000 ms 00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:43.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:43.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms
00:11:43.282 
00:11:43.282 --- 10.0.0.1 ping statistics ---
00:11:43.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:43.282 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms
00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0
00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0
00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:11:43.282 ************************************
00:11:43.282 START TEST nvmf_filesystem_no_in_capsule
00:11:43.282 ************************************
00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0
00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0
00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=481915
00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 481915
00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 481915 ']'
00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:43.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:43.282 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:43.282 [2024-11-20 15:21:31.520002] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization...
00:11:43.282 [2024-11-20 15:21:31.520063] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:43.282 [2024-11-20 15:21:31.619780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:43.282 [2024-11-20 15:21:31.672796] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:43.282 [2024-11-20 15:21:31.672851] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:43.282 [2024-11-20 15:21:31.672860] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:43.282 [2024-11-20 15:21:31.672867] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:43.282 [2024-11-20 15:21:31.672874] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
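waitforlisten 481915 above blocks until the freshly started target answers on the default RPC socket. A condensed sketch of that start-and-wait pattern (binary path, flags, namespace, and socket path are taken from the trace; the polling loop is an illustration, not the exact body of waitforlisten):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll the JSON-RPC socket until the target is up; bail out if the process died
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || exit 1
      sleep 0.5
  done

The reactor notices that follow confirm the four cores requested with -m 0xF (cores 0-3) came up before the first rpc_cmd is issued.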
00:11:43.282 [2024-11-20 15:21:31.675057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.283 [2024-11-20 15:21:31.675216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:43.283 [2024-11-20 15:21:31.675303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.283 [2024-11-20 15:21:31.675304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:43.543 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:43.543 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:43.543 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:43.543 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:43.543 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.543 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:43.543 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:43.543 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:43.543 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.543 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.543 [2024-11-20 15:21:32.396188] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:43.543 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.543 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:43.543 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.543 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.805 Malloc1 00:11:43.805 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.805 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:43.805 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.805 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.805 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.805 15:21:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:43.805 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.805 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.805 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.805 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:43.805 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.805 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.805 [2024-11-20 15:21:32.547778] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:43.805 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.805 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:43.805 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:43.805 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:43.805 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:43.805 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:43.805 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:43.805 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.805 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.805 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.805 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:43.805 { 00:11:43.805 "name": "Malloc1", 00:11:43.805 "aliases": [ 00:11:43.805 "ccd6f04b-9fd6-411a-982c-972b1cf506b3" 00:11:43.805 ], 00:11:43.805 "product_name": "Malloc disk", 00:11:43.805 "block_size": 512, 00:11:43.805 "num_blocks": 1048576, 00:11:43.805 "uuid": "ccd6f04b-9fd6-411a-982c-972b1cf506b3", 00:11:43.805 "assigned_rate_limits": { 00:11:43.805 "rw_ios_per_sec": 0, 00:11:43.805 "rw_mbytes_per_sec": 0, 00:11:43.805 "r_mbytes_per_sec": 0, 00:11:43.805 "w_mbytes_per_sec": 0 00:11:43.805 }, 00:11:43.806 "claimed": true, 00:11:43.806 "claim_type": "exclusive_write", 00:11:43.806 "zoned": false, 00:11:43.806 "supported_io_types": { 00:11:43.806 "read": 
true, 00:11:43.806 "write": true, 00:11:43.806 "unmap": true, 00:11:43.806 "flush": true, 00:11:43.806 "reset": true, 00:11:43.806 "nvme_admin": false, 00:11:43.806 "nvme_io": false, 00:11:43.806 "nvme_io_md": false, 00:11:43.806 "write_zeroes": true, 00:11:43.806 "zcopy": true, 00:11:43.806 "get_zone_info": false, 00:11:43.806 "zone_management": false, 00:11:43.806 "zone_append": false, 00:11:43.806 "compare": false, 00:11:43.806 "compare_and_write": false, 00:11:43.806 "abort": true, 00:11:43.806 "seek_hole": false, 00:11:43.806 "seek_data": false, 00:11:43.806 "copy": true, 00:11:43.806 "nvme_iov_md": false 00:11:43.806 }, 00:11:43.806 "memory_domains": [ 00:11:43.806 { 00:11:43.806 "dma_device_id": "system", 00:11:43.806 "dma_device_type": 1 00:11:43.806 }, 00:11:43.806 { 00:11:43.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.806 "dma_device_type": 2 00:11:43.806 } 00:11:43.806 ], 00:11:43.806 "driver_specific": {} 00:11:43.806 } 00:11:43.806 ]' 00:11:43.806 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:43.806 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:43.806 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:43.806 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:43.806 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:43.806 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:43.806 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:43.806 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:45.721 15:21:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:45.721 15:21:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:45.721 15:21:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:45.721 15:21:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:45.721 15:21:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:47.634 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:47.634 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:47.634 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:47.634 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:47.634 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:47.634 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:47.634 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:47.634 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:47.634 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:47.634 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:47.634 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:47.634 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:47.634 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:47.634 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:47.634 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:47.634 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:47.634 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:47.634 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:48.206 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:49.590 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:49.590 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:49.590 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:49.590 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:49.590 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.590 ************************************ 00:11:49.590 START TEST filesystem_ext4 00:11:49.590 ************************************ 00:11:49.590 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
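Before partitioning, the script derives the device size two independent ways and requires them to agree: get_bdev_size computes it from the bdev_get_bdevs JSON with the jq filters shown in the trace, and sec_size_to_bytes recomputes it from the host's view of /dev/nvme0n1. A sketch of the same derivation; the jq filters and the 536870912-byte result match the trace, while the sysfs read is an assumption (the trace only shows the [[ -e /sys/block/nvme0n1 ]] check and the echoed result):

    # Bdev side: block_size * num_blocks from the RPC, reported in MiB.
    bdev_info=$(scripts/rpc.py bdev_get_bdevs -b Malloc1)
    bs=$(jq '.[] .block_size' <<< "$bdev_info")      # 512
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")      # 1048576
    bdev_mib=$(( bs * nb / 1024 / 1024 ))            # 512 MiB
    malloc_size=$(( bdev_mib * 1024 * 1024 ))        # 536870912 bytes
    # Host side: 512-byte sector count from sysfs (assumed source).
    nvme_size=$(( $(cat /sys/block/nvme0n1/size) * 512 ))
    (( nvme_size == malloc_size ))                   # sizes must agree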
00:11:49.590 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:49.590 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:49.590 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:49.590 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:49.590 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:49.590 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:49.590 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:49.590 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:49.590 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:49.590 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:49.590 mke2fs 1.47.0 (5-Feb-2023) 00:11:49.590 Discarding device blocks: 0/522240 done 00:11:49.590 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:49.590 Filesystem UUID: 4942134b-102c-424f-aed5-44ea3481a190 00:11:49.590 Superblock backups stored on blocks: 00:11:49.590 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:49.590 00:11:49.590 Allocating group tables: 0/64 done 00:11:49.590 Writing inode tables: 0/64 done 00:11:50.973 Creating journal (8192 blocks): done 00:11:50.973 Writing superblocks and filesystem accounting information: 0/64 done 00:11:50.973 00:11:50.973 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:50.973 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:57.676 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:57.676 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:57.676 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:57.676 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:57.676 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:57.676 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:57.676 
15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 481915 00:11:57.676 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:57.676 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:57.676 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:57.676 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:57.676 00:11:57.676 real 0m7.496s 00:11:57.676 user 0m0.023s 00:11:57.676 sys 0m0.083s 00:11:57.676 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:57.676 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:57.676 ************************************ 00:11:57.676 END TEST filesystem_ext4 00:11:57.676 ************************************ 00:11:57.676 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:57.676 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:57.676 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:57.676 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.676 ************************************ 00:11:57.676 START TEST filesystem_btrfs 00:11:57.676 ************************************ 00:11:57.676 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:57.676 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:57.676 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:57.676 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:57.676 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:57.676 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:57.676 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:57.676 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:57.676 15:21:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:57.676 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:57.676 15:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:57.676 btrfs-progs v6.8.1 00:11:57.676 See https://btrfs.readthedocs.io for more information. 00:11:57.676 00:11:57.676 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:57.676 NOTE: several default settings have changed in version 5.15, please make sure 00:11:57.676 this does not affect your deployments: 00:11:57.676 - DUP for metadata (-m dup) 00:11:57.676 - enabled no-holes (-O no-holes) 00:11:57.676 - enabled free-space-tree (-R free-space-tree) 00:11:57.676 00:11:57.676 Label: (null) 00:11:57.676 UUID: 0d6d4d00-2ba6-41c1-a89e-f71021add843 00:11:57.676 Node size: 16384 00:11:57.676 Sector size: 4096 (CPU page size: 4096) 00:11:57.677 Filesystem size: 510.00MiB 00:11:57.677 Block group profiles: 00:11:57.677 Data: single 8.00MiB 00:11:57.677 Metadata: DUP 32.00MiB 00:11:57.677 System: DUP 8.00MiB 00:11:57.677 SSD detected: yes 00:11:57.677 Zoned device: no 00:11:57.677 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:57.677 Checksum: crc32c 00:11:57.677 Number of devices: 1 00:11:57.677 Devices: 00:11:57.677 ID SIZE PATH 00:11:57.677 1 510.00MiB /dev/nvme0n1p1 00:11:57.677 00:11:57.677 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:57.677 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:57.938 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:57.938 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:57.938 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:57.938 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:57.938 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:57.938 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:57.938 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 481915 00:11:57.938 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:57.938 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:57.938 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:57.938 
15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:57.938 00:11:57.938 real 0m0.983s 00:11:57.938 user 0m0.021s 00:11:57.938 sys 0m0.128s 00:11:57.938 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:57.938 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:57.938 ************************************ 00:11:57.938 END TEST filesystem_btrfs 00:11:57.938 ************************************ 00:11:57.938 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:57.938 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:57.938 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:57.938 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.938 ************************************ 00:11:57.938 START TEST filesystem_xfs 00:11:57.938 ************************************ 00:11:57.938 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:57.938 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:57.938 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:57.938 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:57.938 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:57.938 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:57.938 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:57.938 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:57.938 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:57.938 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:57.938 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:57.938 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:57.938 = sectsz=512 attr=2, projid32bit=1 00:11:57.938 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:57.938 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:57.938 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:57.938 = sunit=0 swidth=0 blks 00:11:57.938 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:57.938 log =internal log bsize=4096 blocks=16384, version=2 00:11:57.938 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:57.938 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:59.321 Discarding blocks...Done. 00:11:59.322 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:59.322 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:01.235 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:01.235 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:01.235 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:01.235 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:01.235 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:01.235 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:01.235 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 481915 00:12:01.235 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:01.235 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:01.235 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:01.235 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:01.235 00:12:01.235 real 0m3.031s 00:12:01.235 user 0m0.025s 00:12:01.235 sys 0m0.081s 00:12:01.235 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.235 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:01.235 ************************************ 00:12:01.235 END TEST filesystem_xfs 00:12:01.235 ************************************ 00:12:01.235 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:01.235 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:01.235 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:01.235 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.235 15:21:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:01.235 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:01.235 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:01.235 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:01.235 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:01.235 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:01.235 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:01.235 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:01.235 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.235 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.235 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.235 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:01.235 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 481915 00:12:01.235 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 481915 ']' 00:12:01.235 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 481915 00:12:01.235 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:01.235 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:01.235 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 481915 00:12:01.235 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:01.235 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:01.235 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 481915' 00:12:01.235 killing process with pid 481915 00:12:01.235 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 481915 00:12:01.235 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 481915 00:12:01.495 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:01.496 00:12:01.496 real 0m18.921s 00:12:01.496 user 1m14.679s 00:12:01.496 sys 0m1.491s 00:12:01.496 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.496 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.496 ************************************ 00:12:01.496 END TEST nvmf_filesystem_no_in_capsule 00:12:01.496 ************************************ 00:12:01.496 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:01.496 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:01.496 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.496 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:01.756 ************************************ 00:12:01.756 START TEST nvmf_filesystem_in_capsule 00:12:01.756 ************************************ 00:12:01.756 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:01.756 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:01.756 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:01.756 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:01.756 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:01.756 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.756 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=485818 00:12:01.756 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 485818 00:12:01.756 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:01.756 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 485818 ']' 00:12:01.756 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.756 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:01.756 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
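From here the suite repeats the same filesystem checks with in-capsule data enabled. The only functional difference between the two halves is the argument nvmf_filesystem_part passes through to the transport's -c option, paraphrased from the filesystem.sh lines visible in the trace:

    # filesystem.sh (paraphrase): in_capsule selects the variant.
    in_capsule=$1        # 0 in the run above, 4096 from here on
    nvmfappstart -m 0xF
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c "$in_capsule"
    # With -c 4096, a write of up to 4096 bytes travels inside the
    # command capsule rather than in a separate data transfer.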
00:12:01.756 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:01.756 15:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.756 [2024-11-20 15:21:50.519706] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:12:01.756 [2024-11-20 15:21:50.519762] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:01.756 [2024-11-20 15:21:50.614974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:01.756 [2024-11-20 15:21:50.645738] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:01.756 [2024-11-20 15:21:50.645767] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:01.756 [2024-11-20 15:21:50.645773] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:01.756 [2024-11-20 15:21:50.645781] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:01.756 [2024-11-20 15:21:50.645786] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:01.757 [2024-11-20 15:21:50.647152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:01.757 [2024-11-20 15:21:50.647304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:01.757 [2024-11-20 15:21:50.647408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:01.757 [2024-11-20 15:21:50.647412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.698 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:02.698 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:02.698 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:02.698 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:02.698 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.699 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:02.699 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:02.699 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:02.699 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.699 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.699 [2024-11-20 15:21:51.368618] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:02.699 15:21:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.699 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:02.699 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.699 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.699 Malloc1 00:12:02.699 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.699 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:02.699 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.699 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.699 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.699 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:02.699 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.699 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.699 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.699 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:02.699 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.699 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.699 [2024-11-20 15:21:51.502232] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:02.699 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.699 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:02.699 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:02.699 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:02.699 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:02.699 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:02.699 15:21:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:02.699 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.699 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.699 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.699 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:02.699 { 00:12:02.699 "name": "Malloc1", 00:12:02.699 "aliases": [ 00:12:02.699 "3c24e047-d3c3-4188-99f3-981a62a5dd02" 00:12:02.699 ], 00:12:02.699 "product_name": "Malloc disk", 00:12:02.699 "block_size": 512, 00:12:02.699 "num_blocks": 1048576, 00:12:02.699 "uuid": "3c24e047-d3c3-4188-99f3-981a62a5dd02", 00:12:02.699 "assigned_rate_limits": { 00:12:02.699 "rw_ios_per_sec": 0, 00:12:02.699 "rw_mbytes_per_sec": 0, 00:12:02.699 "r_mbytes_per_sec": 0, 00:12:02.699 "w_mbytes_per_sec": 0 00:12:02.699 }, 00:12:02.699 "claimed": true, 00:12:02.699 "claim_type": "exclusive_write", 00:12:02.699 "zoned": false, 00:12:02.699 "supported_io_types": { 00:12:02.699 "read": true, 00:12:02.699 "write": true, 00:12:02.699 "unmap": true, 00:12:02.699 "flush": true, 00:12:02.699 "reset": true, 00:12:02.699 "nvme_admin": false, 00:12:02.699 "nvme_io": false, 00:12:02.699 "nvme_io_md": false, 00:12:02.699 "write_zeroes": true, 00:12:02.699 "zcopy": true, 00:12:02.699 "get_zone_info": false, 00:12:02.699 "zone_management": false, 00:12:02.699 "zone_append": false, 00:12:02.699 "compare": false, 00:12:02.699 "compare_and_write": false, 00:12:02.699 "abort": true, 00:12:02.699 "seek_hole": false, 00:12:02.699 "seek_data": false, 00:12:02.699 "copy": true, 00:12:02.699 "nvme_iov_md": false 00:12:02.699 }, 00:12:02.699 "memory_domains": [ 00:12:02.699 { 00:12:02.699 "dma_device_id": "system", 00:12:02.699 "dma_device_type": 1 00:12:02.699 }, 00:12:02.699 { 00:12:02.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.699 "dma_device_type": 2 00:12:02.699 } 00:12:02.699 ], 00:12:02.699 "driver_specific": {} 00:12:02.699 } 00:12:02.699 ]' 00:12:02.699 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:02.699 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:02.699 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:02.699 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:02.699 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:02.699 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:02.699 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:02.699 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:04.612 15:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:04.612 15:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:04.612 15:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:04.612 15:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:04.612 15:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:06.524 15:21:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:06.524 15:21:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:06.524 15:21:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:06.524 15:21:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:06.524 15:21:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:06.524 15:21:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:06.524 15:21:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:06.524 15:21:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:06.524 15:21:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:06.524 15:21:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:06.524 15:21:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:06.524 15:21:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:06.524 15:21:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:06.524 15:21:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:06.524 15:21:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:06.524 15:21:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:06.524 15:21:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:06.785 15:21:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:07.731 15:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:08.675 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:08.675 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:08.675 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:08.675 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.675 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.675 ************************************ 00:12:08.675 START TEST filesystem_in_capsule_ext4 00:12:08.675 ************************************ 00:12:08.675 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:08.675 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:08.675 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:08.675 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:08.675 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:08.675 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:08.675 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:08.675 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:08.675 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:08.675 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:08.675 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:08.675 mke2fs 1.47.0 (5-Feb-2023) 00:12:08.675 Discarding device blocks: 0/522240 done 00:12:08.675 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:08.675 Filesystem UUID: 53ff1509-d465-4cb9-8beb-5fae00dba884 00:12:08.675 Superblock backups stored on blocks: 00:12:08.675 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:08.675 00:12:08.675 Allocating group tables: 0/64 done 00:12:08.675 Writing inode tables: 
0/64 done 00:12:08.935 Creating journal (8192 blocks): done 00:12:11.262 Writing superblocks and filesystem accounting information: 0/64 done 00:12:11.262 00:12:11.262 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:11.262 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:17.844 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:17.844 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:17.844 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:17.844 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:17.844 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:17.844 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:17.844 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 485818 00:12:17.844 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:17.844 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:17.844 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:17.844 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:17.844 00:12:17.844 real 0m8.161s 00:12:17.844 user 0m0.034s 00:12:17.844 sys 0m0.072s 00:12:17.844 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:17.844 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:17.844 ************************************ 00:12:17.844 END TEST filesystem_in_capsule_ext4 00:12:17.844 ************************************ 00:12:17.844 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:17.844 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:17.844 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:17.844 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.844 
************************************ 00:12:17.844 START TEST filesystem_in_capsule_btrfs 00:12:17.844 ************************************ 00:12:17.845 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:17.845 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:17.845 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:17.845 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:17.845 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:17.845 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:17.845 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:17.845 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:17.845 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:17.845 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:17.845 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:17.845 btrfs-progs v6.8.1 00:12:17.845 See https://btrfs.readthedocs.io for more information. 00:12:17.845 00:12:17.845 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:17.845 NOTE: several default settings have changed in version 5.15, please make sure 00:12:17.845 this does not affect your deployments: 00:12:17.845 - DUP for metadata (-m dup) 00:12:17.845 - enabled no-holes (-O no-holes) 00:12:17.845 - enabled free-space-tree (-R free-space-tree) 00:12:17.845 00:12:17.845 Label: (null) 00:12:17.845 UUID: f11581f0-80b3-4a78-8b15-6e18170f7b82 00:12:17.845 Node size: 16384 00:12:17.845 Sector size: 4096 (CPU page size: 4096) 00:12:17.845 Filesystem size: 510.00MiB 00:12:17.845 Block group profiles: 00:12:17.845 Data: single 8.00MiB 00:12:17.845 Metadata: DUP 32.00MiB 00:12:17.845 System: DUP 8.00MiB 00:12:17.845 SSD detected: yes 00:12:17.845 Zoned device: no 00:12:17.845 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:17.845 Checksum: crc32c 00:12:17.845 Number of devices: 1 00:12:17.845 Devices: 00:12:17.845 ID SIZE PATH 00:12:17.845 1 510.00MiB /dev/nvme0n1p1 00:12:17.845 00:12:17.845 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:17.845 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:17.845 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:17.845 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:17.845 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:17.845 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:17.845 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:17.845 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:17.845 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 485818 00:12:17.845 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:17.845 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:17.845 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:17.845 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:17.845 00:12:17.845 real 0m0.438s 00:12:17.845 user 0m0.023s 00:12:17.845 sys 0m0.122s 00:12:17.845 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:17.845 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:12:17.845 ************************************ 00:12:17.845 END TEST filesystem_in_capsule_btrfs 00:12:17.845 ************************************ 00:12:17.845 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:17.845 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:17.845 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:17.845 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.845 ************************************ 00:12:17.845 START TEST filesystem_in_capsule_xfs 00:12:17.845 ************************************ 00:12:17.845 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:17.845 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:17.845 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:17.845 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:17.845 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:17.845 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:17.845 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:17.845 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:17.845 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:17.845 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:17.845 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:17.845 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:17.845 = sectsz=512 attr=2, projid32bit=1 00:12:17.845 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:17.845 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:17.845 data = bsize=4096 blocks=130560, imaxpct=25 00:12:17.845 = sunit=0 swidth=0 blks 00:12:17.845 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:17.845 log =internal log bsize=4096 blocks=16384, version=2 00:12:17.845 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:17.845 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:18.417 Discarding blocks...Done. 
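make_filesystem, whose xtrace opens each subtest above, selects a force flag per filesystem type and hands off to the matching mkfs tool. A sketch reconstructed from the traced common/autotest_common.sh lines; the ext4 branch never executes in this trace, so the -F spelling there is an assumption, and any retry loop around mkfs is not shown:

# make_filesystem as reconstructed from the xtrace (common/autotest_common.sh@930-941)
make_filesystem() {
    local fstype=$1
    local dev_name=$2
    local i=0
    local force
    if [ "$fstype" = ext4 ]; then
        force=-F                 # assumption: mkfs.ext4 spells force as -F
    else
        force=-f                 # traced for btrfs and xfs
    fi
    mkfs.$fstype $force "$dev_name"
}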
00:12:18.417 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:18.417 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:20.964 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:20.964 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:20.964 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:20.964 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:20.964 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:20.964 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:20.964 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 485818 00:12:20.964 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:20.964 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:20.964 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:20.964 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:20.964 00:12:20.964 real 0m3.537s 00:12:20.964 user 0m0.028s 00:12:20.964 sys 0m0.079s 00:12:20.964 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.964 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:20.964 ************************************ 00:12:20.964 END TEST filesystem_in_capsule_xfs 00:12:20.964 ************************************ 00:12:20.964 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:21.224 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:21.224 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:21.486 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.486 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:21.486 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:12:21.486 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:21.486 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:21.486 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:21.486 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:21.486 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:21.486 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:21.486 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.486 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:21.486 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.486 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:21.486 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 485818 00:12:21.486 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 485818 ']' 00:12:21.486 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 485818 00:12:21.486 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:21.486 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:21.486 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 485818 00:12:21.486 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:21.486 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:21.486 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 485818' 00:12:21.486 killing process with pid 485818 00:12:21.486 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 485818 00:12:21.486 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 485818 00:12:21.747 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:21.747 00:12:21.747 real 0m20.125s 00:12:21.747 user 1m19.622s 00:12:21.747 sys 0m1.443s 00:12:21.747 15:22:10 
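killprocess, traced above during teardown, checks that the pid still exists and still names the expected reactor process before signalling it. A sketch assembled from the xtraced lines; the sudo comparison at @964 is visible only as a test, so its branch body is omitted here:

# killprocess as reconstructed from the xtrace (common/autotest_common.sh@954-978)
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1              # @954: require an argument
    kill -0 "$pid"                         # @958: process must still exist
    if [ "$(uname)" = Linux ]; then        # @959
        process_name=$(ps --no-headers -o comm= "$pid")   # @960
    fi
    # @964: a process named sudo gets special handling (branch not traced)
    echo "killing process with pid $pid"   # @972
    kill "$pid"                            # @973
    wait "$pid"                            # @978: reap it, propagate its rc
}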
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:21.747 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:21.747 ************************************ 00:12:21.747 END TEST nvmf_filesystem_in_capsule 00:12:21.747 ************************************ 00:12:21.747 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:21.747 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:21.747 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:21.747 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:21.747 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:21.747 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:21.747 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:21.747 rmmod nvme_tcp 00:12:21.747 rmmod nvme_fabrics 00:12:21.747 rmmod nvme_keyring 00:12:21.747 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:21.747 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:21.747 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:21.747 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:21.747 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:21.747 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:21.747 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:21.747 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:21.747 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:21.747 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:21.747 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:21.747 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:21.747 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:21.747 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.748 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:21.748 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.292 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:24.292 00:12:24.292 real 0m49.394s 00:12:24.292 user 2m36.652s 00:12:24.292 sys 0m8.894s 00:12:24.292 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:24.292 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:24.292 
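The nvmftestfini teardown above unloads the kernel initiator modules and then restores iptables minus every rule the suite tagged on the way in. The filtering trick from the trace, in one line:

# iptr (nvmf/common.sh@791): drop only rules carrying an SPDK_NVMF comment
iptables-save | grep -v SPDK_NVMF | iptables-restore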
************************************ 00:12:24.292 END TEST nvmf_filesystem 00:12:24.292 ************************************ 00:12:24.292 15:22:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:24.292 15:22:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:24.292 15:22:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:24.292 15:22:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:24.292 ************************************ 00:12:24.292 START TEST nvmf_target_discovery 00:12:24.292 ************************************ 00:12:24.292 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:24.292 * Looking for test storage... 00:12:24.292 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:24.292 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:24.292 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:12:24.292 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:24.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.292 --rc genhtml_branch_coverage=1 00:12:24.292 --rc genhtml_function_coverage=1 00:12:24.292 --rc genhtml_legend=1 00:12:24.292 --rc geninfo_all_blocks=1 00:12:24.292 --rc geninfo_unexecuted_blocks=1 00:12:24.292 00:12:24.292 ' 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:24.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.292 --rc genhtml_branch_coverage=1 00:12:24.292 --rc genhtml_function_coverage=1 00:12:24.292 --rc genhtml_legend=1 00:12:24.292 --rc geninfo_all_blocks=1 00:12:24.292 --rc geninfo_unexecuted_blocks=1 00:12:24.292 00:12:24.292 ' 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:24.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.292 --rc genhtml_branch_coverage=1 00:12:24.292 --rc genhtml_function_coverage=1 00:12:24.292 --rc genhtml_legend=1 00:12:24.292 --rc geninfo_all_blocks=1 00:12:24.292 --rc geninfo_unexecuted_blocks=1 00:12:24.292 00:12:24.292 ' 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:24.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.292 --rc genhtml_branch_coverage=1 00:12:24.292 --rc genhtml_function_coverage=1 00:12:24.292 --rc genhtml_legend=1 00:12:24.292 --rc geninfo_all_blocks=1 00:12:24.292 --rc geninfo_unexecuted_blocks=1 00:12:24.292 00:12:24.292 ' 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.292 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.293 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.293 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:24.293 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.293 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:24.293 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:24.293 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:24.293 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:24.293 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:24.293 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:24.293 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:24.293 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:24.293 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:24.293 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:24.293 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:24.293 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:24.293 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:24.293 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:24.293 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:24.293 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:24.293 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:24.293 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:24.293 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:24.293 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:24.293 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:24.293 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.293 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:24.293 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.293 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:24.293 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:24.293 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:24.293 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.442 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:32.442 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:32.442 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:32.442 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:32.442 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:32.442 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:32.442 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:32.442 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:32.442 15:22:20 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:32.442 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:32.442 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:32.442 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:32.442 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:32.442 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:32.442 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:32.442 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:32.442 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:32.442 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:32.442 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:32.442 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:32.442 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:32.442 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:32.442 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:32.442 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:32.442 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:32.442 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:32.442 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:32.442 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:32.442 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:32.442 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:32.442 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:32.442 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:32.443 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:32.443 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:32.443 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:32.443 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:32.443 15:22:20 
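nvmf_tcp_init, traced above, splits the two detected E810 ports between the root namespace (initiator side) and a dedicated network namespace (target side), so one host can exercise NVMe/TCP over real NICs against itself. The sequence condensed from the trace; the link bring-up, iptables ACCEPT rule, and ping checks follow immediately below:

# namespace split performed by nvmf_tcp_init (names and addresses from the trace)
ip -4 addr flush cvl_0_0                      # clean both ports first
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                  # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move one port into it
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator keeps the other port
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0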
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:32.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:32.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:12:32.443 00:12:32.443 --- 10.0.0.2 ping statistics --- 00:12:32.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.443 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:32.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:32.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:12:32.443 00:12:32.443 --- 10.0.0.1 ping statistics --- 00:12:32.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.443 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=494106 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 494106 00:12:32.443 15:22:20 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 494106 ']' 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.443 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:32.444 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.444 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:32.444 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.444 [2024-11-20 15:22:20.675329] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:12:32.444 [2024-11-20 15:22:20.675393] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.444 [2024-11-20 15:22:20.777475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:32.444 [2024-11-20 15:22:20.831458] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.444 [2024-11-20 15:22:20.831507] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.444 [2024-11-20 15:22:20.831516] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:32.444 [2024-11-20 15:22:20.831524] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:32.444 [2024-11-20 15:22:20.831532] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
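nvmfappstart launches the target inside that namespace and blocks until the RPC socket answers, which is why the rpc_cmd calls below can run immediately. The launch condensed from the trace; the waitforlisten polling loop itself is not xtraced, so the comment summarizes its effect:

# target launch as traced (nvmf/common.sh@508-510)
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF &      # shm id 0, all tracepoint groups, cores 0-3
nvmfpid=$!
waitforlisten "$nvmfpid"         # poll /var/tmp/spdk.sock until it accepts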
00:12:32.444 [2024-11-20 15:22:20.833326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.444 [2024-11-20 15:22:20.833376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.444 [2024-11-20 15:22:20.833536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.444 [2024-11-20 15:22:20.833536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.706 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:32.706 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:32.706 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:32.706 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:32.706 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.706 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:32.706 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:32.706 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.706 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.706 [2024-11-20 15:22:21.553180] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:32.706 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.706 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:32.706 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:32.706 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:32.706 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.706 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.706 Null1 00:12:32.706 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.706 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:32.706 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.706 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.706 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.706 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:32.706 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.706 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.706 15:22:21 
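Ordering matters in the trace above: the TCP transport is created once, before any subsystem, so that the later per-subsystem listeners have something to bind into. As traced (flag semantics are whatever rpc.py defines; they are not expanded here):

# transport creation precedes all subsystem setup (target/discovery.sh@23)
rpc_cmd nvmf_create_transport -t tcp -o -u 8192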
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.706 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:32.706 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.706 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.706 [2024-11-20 15:22:21.613720] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:32.706 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.706 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:32.706 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:32.707 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.707 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.707 Null2 00:12:32.707 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.707 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:32.707 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.707 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.707 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.707 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:32.707 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.707 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.707 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.707 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:32.707 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.707 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:32.981 Null3 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.981 Null4 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.981 15:22:21 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.981 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:12:33.244 00:12:33.244 Discovery Log Number of Records 6, Generation counter 6 00:12:33.244 =====Discovery Log Entry 0====== 00:12:33.244 trtype: tcp 00:12:33.244 adrfam: ipv4 00:12:33.244 subtype: current discovery subsystem 00:12:33.244 treq: not required 00:12:33.244 portid: 0 00:12:33.244 trsvcid: 4420 00:12:33.244 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:33.244 traddr: 10.0.0.2 00:12:33.244 eflags: explicit discovery connections, duplicate discovery information 00:12:33.244 sectype: none 00:12:33.244 =====Discovery Log Entry 1====== 00:12:33.244 trtype: tcp 00:12:33.244 adrfam: ipv4 00:12:33.244 subtype: nvme subsystem 00:12:33.244 treq: not required 00:12:33.244 portid: 0 00:12:33.244 trsvcid: 4420 00:12:33.244 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:33.244 traddr: 10.0.0.2 00:12:33.244 eflags: none 00:12:33.244 sectype: none 00:12:33.244 =====Discovery Log Entry 2====== 00:12:33.244 trtype: tcp 00:12:33.244 adrfam: ipv4 00:12:33.244 subtype: nvme subsystem 00:12:33.244 treq: not required 00:12:33.244 portid: 0 00:12:33.244 trsvcid: 4420 00:12:33.244 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:33.244 traddr: 10.0.0.2 00:12:33.244 eflags: none 00:12:33.244 sectype: none 00:12:33.244 =====Discovery Log Entry 3====== 00:12:33.244 trtype: tcp 00:12:33.244 adrfam: ipv4 00:12:33.244 subtype: nvme subsystem 00:12:33.244 treq: not required 00:12:33.244 portid: 0 00:12:33.244 trsvcid: 4420 00:12:33.244 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:33.244 traddr: 10.0.0.2 00:12:33.244 eflags: none 00:12:33.244 sectype: none 00:12:33.244 =====Discovery Log Entry 4====== 00:12:33.244 trtype: tcp 00:12:33.244 adrfam: ipv4 00:12:33.244 subtype: nvme subsystem 
00:12:33.244 treq: not required 00:12:33.244 portid: 0 00:12:33.244 trsvcid: 4420 00:12:33.244 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:33.244 traddr: 10.0.0.2 00:12:33.244 eflags: none 00:12:33.244 sectype: none 00:12:33.244 =====Discovery Log Entry 5====== 00:12:33.244 trtype: tcp 00:12:33.244 adrfam: ipv4 00:12:33.244 subtype: discovery subsystem referral 00:12:33.244 treq: not required 00:12:33.244 portid: 0 00:12:33.244 trsvcid: 4430 00:12:33.244 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:33.244 traddr: 10.0.0.2 00:12:33.244 eflags: none 00:12:33.244 sectype: none 00:12:33.244 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:33.244 Perform nvmf subsystem discovery via RPC 00:12:33.244 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:33.244 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.244 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.244 [ 00:12:33.244 { 00:12:33.244 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:33.244 "subtype": "Discovery", 00:12:33.244 "listen_addresses": [ 00:12:33.244 { 00:12:33.244 "trtype": "TCP", 00:12:33.244 "adrfam": "IPv4", 00:12:33.244 "traddr": "10.0.0.2", 00:12:33.244 "trsvcid": "4420" 00:12:33.244 } 00:12:33.244 ], 00:12:33.244 "allow_any_host": true, 00:12:33.244 "hosts": [] 00:12:33.244 }, 00:12:33.244 { 00:12:33.244 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:33.244 "subtype": "NVMe", 00:12:33.244 "listen_addresses": [ 00:12:33.244 { 00:12:33.244 "trtype": "TCP", 00:12:33.244 "adrfam": "IPv4", 00:12:33.244 "traddr": "10.0.0.2", 00:12:33.244 "trsvcid": "4420" 00:12:33.244 } 00:12:33.244 ], 00:12:33.245 "allow_any_host": true, 00:12:33.245 "hosts": [], 00:12:33.245 "serial_number": "SPDK00000000000001", 00:12:33.245 "model_number": "SPDK bdev Controller", 00:12:33.245 "max_namespaces": 32, 00:12:33.245 "min_cntlid": 1, 00:12:33.245 "max_cntlid": 65519, 00:12:33.245 "namespaces": [ 00:12:33.245 { 00:12:33.245 "nsid": 1, 00:12:33.245 "bdev_name": "Null1", 00:12:33.245 "name": "Null1", 00:12:33.245 "nguid": "B5AFA405E04F4837883448CFFF559A58", 00:12:33.245 "uuid": "b5afa405-e04f-4837-8834-48cfff559a58" 00:12:33.245 } 00:12:33.245 ] 00:12:33.245 }, 00:12:33.245 { 00:12:33.245 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:33.245 "subtype": "NVMe", 00:12:33.245 "listen_addresses": [ 00:12:33.245 { 00:12:33.245 "trtype": "TCP", 00:12:33.245 "adrfam": "IPv4", 00:12:33.245 "traddr": "10.0.0.2", 00:12:33.245 "trsvcid": "4420" 00:12:33.245 } 00:12:33.245 ], 00:12:33.245 "allow_any_host": true, 00:12:33.245 "hosts": [], 00:12:33.245 "serial_number": "SPDK00000000000002", 00:12:33.245 "model_number": "SPDK bdev Controller", 00:12:33.245 "max_namespaces": 32, 00:12:33.245 "min_cntlid": 1, 00:12:33.245 "max_cntlid": 65519, 00:12:33.245 "namespaces": [ 00:12:33.245 { 00:12:33.245 "nsid": 1, 00:12:33.245 "bdev_name": "Null2", 00:12:33.245 "name": "Null2", 00:12:33.245 "nguid": "3471C7EDEB0247D8A8B166E5AA0F34F6", 00:12:33.245 "uuid": "3471c7ed-eb02-47d8-a8b1-66e5aa0f34f6" 00:12:33.245 } 00:12:33.245 ] 00:12:33.245 }, 00:12:33.245 { 00:12:33.245 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:33.245 "subtype": "NVMe", 00:12:33.245 "listen_addresses": [ 00:12:33.245 { 00:12:33.245 "trtype": "TCP", 00:12:33.245 "adrfam": "IPv4", 00:12:33.245 "traddr": "10.0.0.2", 
00:12:33.245 "trsvcid": "4420" 00:12:33.245 } 00:12:33.245 ], 00:12:33.245 "allow_any_host": true, 00:12:33.245 "hosts": [], 00:12:33.245 "serial_number": "SPDK00000000000003", 00:12:33.245 "model_number": "SPDK bdev Controller", 00:12:33.245 "max_namespaces": 32, 00:12:33.245 "min_cntlid": 1, 00:12:33.245 "max_cntlid": 65519, 00:12:33.245 "namespaces": [ 00:12:33.245 { 00:12:33.245 "nsid": 1, 00:12:33.245 "bdev_name": "Null3", 00:12:33.245 "name": "Null3", 00:12:33.245 "nguid": "F38010F15861438BBD12EFFCFAFC785A", 00:12:33.245 "uuid": "f38010f1-5861-438b-bd12-effcfafc785a" 00:12:33.245 } 00:12:33.245 ] 00:12:33.245 }, 00:12:33.245 { 00:12:33.245 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:33.245 "subtype": "NVMe", 00:12:33.245 "listen_addresses": [ 00:12:33.245 { 00:12:33.245 "trtype": "TCP", 00:12:33.245 "adrfam": "IPv4", 00:12:33.245 "traddr": "10.0.0.2", 00:12:33.245 "trsvcid": "4420" 00:12:33.245 } 00:12:33.245 ], 00:12:33.245 "allow_any_host": true, 00:12:33.245 "hosts": [], 00:12:33.245 "serial_number": "SPDK00000000000004", 00:12:33.245 "model_number": "SPDK bdev Controller", 00:12:33.245 "max_namespaces": 32, 00:12:33.245 "min_cntlid": 1, 00:12:33.245 "max_cntlid": 65519, 00:12:33.245 "namespaces": [ 00:12:33.245 { 00:12:33.245 "nsid": 1, 00:12:33.245 "bdev_name": "Null4", 00:12:33.245 "name": "Null4", 00:12:33.245 "nguid": "CFD2060DB93E440C8ACE77EDCC72FA39", 00:12:33.245 "uuid": "cfd2060d-b93e-440c-8ace-77edcc72fa39" 00:12:33.245 } 00:12:33.245 ] 00:12:33.245 } 00:12:33.245 ] 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.245 15:22:22 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:33.245 15:22:22 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:33.245 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:33.507 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:33.507 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:33.507 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:33.507 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:33.507 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:33.507 rmmod nvme_tcp 00:12:33.507 rmmod nvme_fabrics 00:12:33.507 rmmod nvme_keyring 00:12:33.507 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:33.507 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:33.507 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:33.507 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 494106 ']' 00:12:33.507 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 494106 00:12:33.507 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 494106 ']' 00:12:33.507 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 494106 00:12:33.507 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:33.507 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:33.507 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 494106 00:12:33.507 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:33.507 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:33.507 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 494106' 00:12:33.507 killing process with pid 494106 00:12:33.507 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 494106 00:12:33.507 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 494106 00:12:33.769 15:22:22 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:33.769 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:33.769 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:33.769 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:33.769 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:33.769 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:33.769 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:33.769 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:33.769 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:33.769 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.769 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:33.769 15:22:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.685 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:35.685 00:12:35.685 real 0m11.713s 00:12:35.685 user 0m8.981s 00:12:35.685 sys 0m6.146s 00:12:35.685 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:35.685 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:35.685 ************************************ 00:12:35.685 END TEST nvmf_target_discovery 00:12:35.685 ************************************ 00:12:35.685 15:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:35.685 15:22:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:35.685 15:22:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:35.685 15:22:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:35.946 ************************************ 00:12:35.946 START TEST nvmf_referrals 00:12:35.946 ************************************ 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:35.946 * Looking for test storage... 
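The nvmf_referrals test starting here is driven by the same run_test wrapper that produced the START/END banners and the real/user/sys timing above. A minimal sketch of that wrapper pattern in plain bash follows; run_test_sketch and its banner width are illustrative stand-ins, not the exact autotest_common.sh helper:

# Illustrative stand-in for the run_test helper: print banners, time the
# test command, and propagate its exit status.
run_test_sketch() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"                # run the test script with its arguments
  local rc=$?              # capture the test's exit status before anything else
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}

# Usage mirroring the log:
# run_test_sketch nvmf_referrals ./test/nvmf/target/referrals.sh --transport=tcp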
00:12:35.946 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:35.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.946 --rc genhtml_branch_coverage=1 00:12:35.946 --rc genhtml_function_coverage=1 00:12:35.946 --rc genhtml_legend=1 00:12:35.946 --rc geninfo_all_blocks=1 00:12:35.946 --rc geninfo_unexecuted_blocks=1 00:12:35.946 00:12:35.946 ' 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:35.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.946 --rc genhtml_branch_coverage=1 00:12:35.946 --rc genhtml_function_coverage=1 00:12:35.946 --rc genhtml_legend=1 00:12:35.946 --rc geninfo_all_blocks=1 00:12:35.946 --rc geninfo_unexecuted_blocks=1 00:12:35.946 00:12:35.946 ' 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:35.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.946 --rc genhtml_branch_coverage=1 00:12:35.946 --rc genhtml_function_coverage=1 00:12:35.946 --rc genhtml_legend=1 00:12:35.946 --rc geninfo_all_blocks=1 00:12:35.946 --rc geninfo_unexecuted_blocks=1 00:12:35.946 00:12:35.946 ' 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:35.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.946 --rc genhtml_branch_coverage=1 00:12:35.946 --rc genhtml_function_coverage=1 00:12:35.946 --rc genhtml_legend=1 00:12:35.946 --rc geninfo_all_blocks=1 00:12:35.946 --rc geninfo_unexecuted_blocks=1 00:12:35.946 00:12:35.946 ' 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:35.946 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:35.947 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:35.947 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:35.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:35.947 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:35.947 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:35.947 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:35.947 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:35.947 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
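The variables being set in this stretch (ports 4420/4421/4422, a host NQN from nvme gen-hostnqn, the matching host ID, and the 127.0.0.2-4 referral addresses) feed every nvme-cli invocation in the test. A minimal sketch of how that host identity is derived and used, assuming nvme-cli is installed and a target is listening on the test address:

# Generate a host NQN once; the UUID embedded in it doubles as the host ID,
# exactly as nvmf/common.sh does above.
HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
HOSTID=${HOSTNQN##*uuid:}            # strip the NQN prefix, keep the bare UUID

# Discover the target with that identity (10.0.0.2:4420 are this test's values).
nvme discover --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -a 10.0.0.2 -s 4420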
00:12:35.947 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:35.947 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:35.947 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:35.947 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:35.947 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:35.947 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:35.947 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:35.947 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:35.947 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:35.947 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:35.947 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.947 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:35.947 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.208 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:36.208 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:36.208 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:36.208 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.350 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:44.351 15:22:32 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:44.351 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:44.351 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:44.351 
15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:44.351 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:44.351 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:44.351 15:22:32 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:44.351 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:44.351 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:12:44.351 00:12:44.351 --- 10.0.0.2 ping statistics --- 00:12:44.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.351 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:12:44.351 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:44.351 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:44.351 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:12:44.351 00:12:44.351 --- 10.0.0.1 ping statistics --- 00:12:44.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.351 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:12:44.352 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:44.352 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:44.352 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:44.352 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:44.352 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:44.352 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:44.352 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:44.352 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:44.352 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:44.352 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:44.352 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:44.352 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:44.352 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.352 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=498644 00:12:44.352 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 498644 00:12:44.352 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:44.352 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 498644 ']' 00:12:44.352 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.352 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:44.352 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
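waitforlisten above blocks until the target's RPC UNIX socket exists before any rpc_cmd is issued, which is what the "Waiting for process to start up..." message reports. A minimal sketch of the same idea, assuming the default socket path; the poll interval and retry count are illustrative:

# Poll for the SPDK RPC socket that nvmf_tgt creates on startup.
SOCK=/var/tmp/spdk.sock
for _ in $(seq 1 100); do
  [ -S "$SOCK" ] && break            # -S is true once the UNIX socket exists
  sleep 0.1
done
[ -S "$SOCK" ] || { echo "nvmf_tgt never opened $SOCK" >&2; exit 1; }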
00:12:44.352 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:44.352 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.352 [2024-11-20 15:22:32.540360] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:12:44.352 [2024-11-20 15:22:32.540427] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:44.352 [2024-11-20 15:22:32.639825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:44.352 [2024-11-20 15:22:32.693346] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:44.352 [2024-11-20 15:22:32.693396] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:44.352 [2024-11-20 15:22:32.693405] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:44.352 [2024-11-20 15:22:32.693413] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:44.352 [2024-11-20 15:22:32.693419] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:44.352 [2024-11-20 15:22:32.695494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:44.352 [2024-11-20 15:22:32.695648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:44.352 [2024-11-20 15:22:32.695813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.352 [2024-11-20 15:22:32.695813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.613 [2024-11-20 15:22:33.419790] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
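The transport and discovery listener created above, plus the three referrals added just below, can be reproduced against a running target with SPDK's rpc.py client. A sketch assuming an SPDK checkout and the default RPC socket; addresses and ports are this test's values:

# Mirror the referrals.sh setup via rpc.py (path relative to an SPDK tree).
RPC=scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192    # -o disables the C2H success optimization, -u sets in-capsule data size
$RPC nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery   # discovery service on port 8009
$RPC nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
$RPC nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
$RPC nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
$RPC nvmf_discovery_get_referrals | jq length   # the test expects 3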
00:12:44.613 [2024-11-20 15:22:33.436087] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:44.613 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.874 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:44.874 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:44.874 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:44.874 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:44.874 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:44.874 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:44.874 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:44.874 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:44.875 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:44.875 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:44.875 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:44.875 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.875 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.875 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.875 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:44.875 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.875 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.875 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.875 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:44.875 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.875 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.875 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.875 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:44.875 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:44.875 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.875 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.875 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.136 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:45.136 15:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:45.136 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:45.136 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:45.136 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:45.136 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:45.136 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:45.136 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:45.136 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:45.136 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:45.136 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.136 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:45.136 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.136 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:45.136 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.136 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:45.397 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.397 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:45.397 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:45.397 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:45.397 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:45.397 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.397 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:45.397 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:45.397 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.397 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:45.397 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:45.397 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:45.397 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:45.397 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:45.397 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:45.397 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:45.397 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:45.397 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:45.397 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:45.397 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:45.397 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:45.397 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:45.397 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:45.397 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:45.659 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:45.659 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:45.659 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:45.659 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:45.659 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:45.659 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:45.920 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:45.920 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:45.920 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.920 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:45.920 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.920 15:22:34 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:45.920 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:45.920 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:45.920 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:45.920 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.920 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:45.920 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:45.920 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.921 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:45.921 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:45.921 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:45.921 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:45.921 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:45.921 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:45.921 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:45.921 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:46.182 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:46.182 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:46.182 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:46.182 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:46.182 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:46.182 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:46.182 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:46.182 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:46.182 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:46.182 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:46.182 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:46.182 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:46.182 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:46.443 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:46.443 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:46.443 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.443 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.443 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.443 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:46.443 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.443 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:46.443 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.443 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.443 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:46.443 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:46.443 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:46.443 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:46.443 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:46.443 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:46.443 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:46.705 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:46.705 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:46.705 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:46.705 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:46.705 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:46.705 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:46.705 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
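
Note: the referral assertions above boil down to this round-trip, shown here with SPDK's stock scripts/rpc.py in place of the suite's rpc_cmd wrapper (addresses, ports, and RPC names are the ones used in this run):

    rpc="$SPDK/scripts/rpc.py"
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    # Publish three referrals on the discovery subsystem.
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    # Referrals as the target reports them over RPC ...
    $rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
    # ... and as an initiator sees them on the discovery log page.
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
    # Remove them again; get_referrals should then report length 0.
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $rpc nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done
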
00:12:46.705 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:46.705 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:46.705 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:46.705 rmmod nvme_tcp 00:12:46.705 rmmod nvme_fabrics 00:12:46.705 rmmod nvme_keyring 00:12:46.705 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:46.705 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:46.705 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:46.705 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 498644 ']' 00:12:46.705 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 498644 00:12:46.705 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 498644 ']' 00:12:46.705 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 498644 00:12:46.705 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:46.705 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:46.705 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 498644 00:12:46.705 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:46.705 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:46.705 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 498644' 00:12:46.705 killing process with pid 498644 00:12:46.705 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 498644 00:12:46.705 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 498644 00:12:46.967 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:46.967 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:46.967 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:46.967 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:46.967 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:46.967 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:46.967 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:46.967 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:46.967 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:46.967 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.967 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:46.967 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.879 15:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:48.879 00:12:48.879 real 0m13.164s 00:12:48.879 user 0m15.392s 00:12:48.879 sys 0m6.549s 00:12:48.879 15:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:48.879 15:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.879 ************************************ 00:12:48.879 END TEST nvmf_referrals 00:12:48.879 ************************************ 00:12:49.140 15:22:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:49.140 15:22:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:49.140 15:22:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:49.140 15:22:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:49.140 ************************************ 00:12:49.140 START TEST nvmf_connect_disconnect 00:12:49.140 ************************************ 00:12:49.140 15:22:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:49.140 * Looking for test storage... 00:12:49.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:49.140 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:49.140 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:12:49.140 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:49.140 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:49.140 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:49.140 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:49.140 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:49.140 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:49.140 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:49.140 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 
-- # case "$op" in 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:49.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.402 --rc genhtml_branch_coverage=1 00:12:49.402 --rc genhtml_function_coverage=1 00:12:49.402 --rc genhtml_legend=1 00:12:49.402 --rc geninfo_all_blocks=1 00:12:49.402 --rc geninfo_unexecuted_blocks=1 00:12:49.402 00:12:49.402 ' 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:49.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.402 --rc genhtml_branch_coverage=1 00:12:49.402 --rc genhtml_function_coverage=1 00:12:49.402 --rc genhtml_legend=1 00:12:49.402 --rc geninfo_all_blocks=1 00:12:49.402 --rc geninfo_unexecuted_blocks=1 00:12:49.402 00:12:49.402 ' 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:49.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.402 --rc genhtml_branch_coverage=1 00:12:49.402 --rc genhtml_function_coverage=1 00:12:49.402 --rc genhtml_legend=1 00:12:49.402 --rc geninfo_all_blocks=1 00:12:49.402 --rc geninfo_unexecuted_blocks=1 00:12:49.402 00:12:49.402 ' 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:49.402 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.402 --rc genhtml_branch_coverage=1 00:12:49.402 --rc genhtml_function_coverage=1 00:12:49.402 --rc genhtml_legend=1 00:12:49.402 --rc geninfo_all_blocks=1 00:12:49.402 --rc geninfo_unexecuted_blocks=1 00:12:49.402 00:12:49.402 ' 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.402 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.403 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.403 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:49.403 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.403 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:49.403 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:49.403 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:49.403 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:49.403 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:49.403 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:49.403 15:22:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:49.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:49.403 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:49.403 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:49.403 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:49.403 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:49.403 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:49.403 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:49.403 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:49.403 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:49.403 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:49.403 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:49.403 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:49.403 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.403 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:49.403 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.403 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:49.403 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:49.403 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:49.403 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:57.552 
15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:57.552 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:57.552 
15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:57.552 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:57.552 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
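
Note: the "Found net devices under ..." lines come from a plain sysfs lookup; a minimal standalone version, assuming one of the e810 PCI addresses found in this run:

    pci=0000:4b:00.0
    # Each PCI NIC exposes its netdev(s) under /sys/bus/pci/devices/<addr>/net/.
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip path, keep names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
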
00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:57.552 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:57.552 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:57.553 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:57.553 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:57.553 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:57.553 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:57.553 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:57.553 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:57.553 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:57.553 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:57.553 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:57.553 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:57.553 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:57.553 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:57.553 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:57.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:57.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.673 ms 00:12:57.553 00:12:57.553 --- 10.0.0.2 ping statistics --- 00:12:57.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.553 rtt min/avg/max/mdev = 0.673/0.673/0.673/0.000 ms 00:12:57.553 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:57.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:57.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:12:57.553 00:12:57.553 --- 10.0.0.1 ping statistics --- 00:12:57.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.553 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:12:57.553 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:57.553 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:57.553 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:57.553 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:57.553 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:57.553 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:57.553 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:57.553 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:57.553 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:57.553 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:57.553 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:57.553 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:57.553 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:57.553 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=503574 00:12:57.553 15:22:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 503574 00:12:57.553 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:57.553 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 503574 ']' 00:12:57.553 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.553 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:57.553 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.553 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:57.553 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:57.553 [2024-11-20 15:22:45.754003] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:12:57.553 [2024-11-20 15:22:45.754094] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.553 [2024-11-20 15:22:45.857324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:57.553 [2024-11-20 15:22:45.910195] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:57.553 [2024-11-20 15:22:45.910249] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:57.553 [2024-11-20 15:22:45.910257] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:57.553 [2024-11-20 15:22:45.910265] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:57.553 [2024-11-20 15:22:45.910271] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
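
Note: the namespace topology assembled above can be reproduced with the same ip/iptables invocations the trace shows; a condensed sketch (interface and namespace names are the ones from this run):

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                     # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # Let NVMe/TCP traffic in on the initiator port, then verify both
    # directions with the single-packet pings seen in the trace.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1
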
00:12:57.553 [2024-11-20 15:22:45.912259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.553 [2024-11-20 15:22:45.912419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:57.553 [2024-11-20 15:22:45.912582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.553 [2024-11-20 15:22:45.912582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:57.815 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:57.815 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:57.815 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:57.815 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:57.815 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:57.815 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:57.815 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:57.815 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.815 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:57.816 [2024-11-20 15:22:46.628772] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:57.816 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.816 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:57.816 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.816 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:57.816 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.816 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:57.816 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:57.816 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.816 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:57.816 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.816 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:57.816 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.816 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:57.816 15:22:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.816 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.816 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.816 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:57.816 [2024-11-20 15:22:46.706425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.816 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.816 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:57.816 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:57.816 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:02.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.998 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.295 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.295 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:16.295 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:16.295 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:16.295 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:13:16.295 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:16.295 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:13:16.295 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:16.295 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:16.295 rmmod nvme_tcp 00:13:16.295 rmmod nvme_fabrics 00:13:16.295 rmmod nvme_keyring 00:13:16.295 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:16.295 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:13:16.295 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:13:16.295 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 503574 ']' 00:13:16.295 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 503574 00:13:16.295 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 503574 ']' 00:13:16.295 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 503574 00:13:16.295 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
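With nvmf_tgt running in the namespace, the test body above amounts to five RPCs followed by a five-iteration connect/disconnect loop. The RPC arguments below are taken verbatim from the trace; the connect/disconnect step itself runs under 'set +x', so only its per-iteration "disconnected 1 controller(s)" output is visible, and the nvme-cli loop shown here is an illustrative reconstruction rather than the literal script body:

  # Provision the target over its /var/tmp/spdk.sock RPC socket (rpc_cmd in the trace).
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc.py bdev_malloc_create 64 512                 # creates bdev "Malloc0"
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # num_iterations=5 in the trace; each pass should log "disconnected 1 controller(s)".
  for i in 1 2 3 4 5; do
      nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  done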
00:13:16.295 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:16.295 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 503574 00:13:16.295 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:16.295 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:16.295 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 503574' 00:13:16.295 killing process with pid 503574 00:13:16.295 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 503574 00:13:16.295 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 503574 00:13:16.557 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:16.557 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:16.557 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:16.557 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:13:16.557 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:13:16.557 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:16.557 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:13:16.557 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:16.557 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:16.557 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.557 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:16.557 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.467 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:18.467 00:13:18.467 real 0m29.462s 00:13:18.467 user 1m19.389s 00:13:18.467 sys 0m7.146s 00:13:18.467 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:18.467 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:18.467 ************************************ 00:13:18.467 END TEST nvmf_connect_disconnect 00:13:18.467 ************************************ 00:13:18.467 15:23:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:18.467 15:23:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:18.467 15:23:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:18.467 15:23:07 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@10 -- # set +x 00:13:18.728 ************************************ 00:13:18.728 START TEST nvmf_multitarget 00:13:18.728 ************************************ 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:18.728 * Looking for test storage... 00:13:18.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:18.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.728 --rc genhtml_branch_coverage=1 00:13:18.728 --rc genhtml_function_coverage=1 00:13:18.728 --rc genhtml_legend=1 00:13:18.728 --rc geninfo_all_blocks=1 00:13:18.728 --rc geninfo_unexecuted_blocks=1 00:13:18.728 00:13:18.728 ' 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:18.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.728 --rc genhtml_branch_coverage=1 00:13:18.728 --rc genhtml_function_coverage=1 00:13:18.728 --rc genhtml_legend=1 00:13:18.728 --rc geninfo_all_blocks=1 00:13:18.728 --rc geninfo_unexecuted_blocks=1 00:13:18.728 00:13:18.728 ' 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:18.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.728 --rc genhtml_branch_coverage=1 00:13:18.728 --rc genhtml_function_coverage=1 00:13:18.728 --rc genhtml_legend=1 00:13:18.728 --rc geninfo_all_blocks=1 00:13:18.728 --rc geninfo_unexecuted_blocks=1 00:13:18.728 00:13:18.728 ' 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:18.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.728 --rc genhtml_branch_coverage=1 00:13:18.728 --rc genhtml_function_coverage=1 00:13:18.728 --rc genhtml_legend=1 00:13:18.728 --rc geninfo_all_blocks=1 00:13:18.728 --rc geninfo_unexecuted_blocks=1 00:13:18.728 00:13:18.728 ' 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:18.728 15:23:07 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:18.728 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:13:18.990 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:18.990 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:18.990 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:18.990 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.990 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.990 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.990 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:18.990 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.990 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:13:18.990 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:18.990 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:18.990 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:18.990 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:18.990 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:18.990 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:18.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:18.990 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:18.990 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:18.990 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:18.990 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:18.990 15:23:07 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:18.990 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:18.990 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:18.990 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:18.990 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:18.990 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:18.990 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.990 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:18.990 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.990 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:18.990 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:18.990 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:13:18.990 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:27.137 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:27.137 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:27.137 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:27.138 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:27.138 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:27.138 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:27.138 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:27.138 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:27.138 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:27.138 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:27.138 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:27.138 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:27.138 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:27.138 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:27.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:27.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.459 ms 00:13:27.138 00:13:27.138 --- 10.0.0.2 ping statistics --- 00:13:27.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.138 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:13:27.138 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:27.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:27.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:13:27.138 00:13:27.138 --- 10.0.0.1 ping statistics --- 00:13:27.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.138 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:13:27.138 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:27.138 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:13:27.138 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:27.138 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:27.138 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:27.138 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:27.138 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:27.138 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:27.138 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:27.138 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:27.138 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:27.138 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:27.138 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:27.138 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=511700 00:13:27.138 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 511700 00:13:27.138 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:27.138 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 511700 ']' 00:13:27.138 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.138 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:27.138 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.138 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:27.138 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:27.138 [2024-11-20 15:23:15.271691] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
00:13:27.138 [2024-11-20 15:23:15.271762] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.138 [2024-11-20 15:23:15.377061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:27.138 [2024-11-20 15:23:15.430373] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:27.138 [2024-11-20 15:23:15.430424] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:27.138 [2024-11-20 15:23:15.430433] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:27.138 [2024-11-20 15:23:15.430440] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:27.138 [2024-11-20 15:23:15.430447] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:27.138 [2024-11-20 15:23:15.432846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:27.138 [2024-11-20 15:23:15.433005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:27.139 [2024-11-20 15:23:15.433472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:27.139 [2024-11-20 15:23:15.433572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.400 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:27.400 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:13:27.400 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:27.400 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:27.400 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:27.400 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:27.400 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:27.400 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:27.400 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:27.400 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:27.400 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:27.400 "nvmf_tgt_1" 00:13:27.661 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:27.661 "nvmf_tgt_2" 00:13:27.661 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:13:27.661 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:27.661 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:27.662 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:27.923 true 00:13:27.923 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:27.923 true 00:13:27.923 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:27.923 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:28.184 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:28.184 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:28.184 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:28.184 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:28.184 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:13:28.184 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:28.184 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:13:28.184 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:28.184 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:28.184 rmmod nvme_tcp 00:13:28.184 rmmod nvme_fabrics 00:13:28.184 rmmod nvme_keyring 00:13:28.184 15:23:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:28.184 15:23:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:13:28.184 15:23:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:13:28.184 15:23:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 511700 ']' 00:13:28.184 15:23:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 511700 00:13:28.184 15:23:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 511700 ']' 00:13:28.184 15:23:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 511700 00:13:28.184 15:23:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:13:28.184 15:23:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:28.184 15:23:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 511700 00:13:28.184 15:23:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:28.184 15:23:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:28.184 15:23:17 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 511700' 00:13:28.184 killing process with pid 511700 00:13:28.185 15:23:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 511700 00:13:28.185 15:23:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 511700 00:13:28.446 15:23:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:28.446 15:23:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:28.446 15:23:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:28.446 15:23:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:13:28.446 15:23:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:13:28.446 15:23:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:28.446 15:23:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:13:28.446 15:23:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:28.446 15:23:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:28.446 15:23:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.446 15:23:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:28.446 15:23:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.370 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:30.635 00:13:30.635 real 0m11.875s 00:13:30.635 user 0m10.299s 00:13:30.635 sys 0m6.172s 00:13:30.635 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:30.635 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:30.635 ************************************ 00:13:30.635 END TEST nvmf_multitarget 00:13:30.635 ************************************ 00:13:30.635 15:23:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:30.635 15:23:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:30.635 15:23:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:30.635 15:23:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:30.635 ************************************ 00:13:30.635 START TEST nvmf_rpc 00:13:30.635 ************************************ 00:13:30.635 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:30.635 * Looking for test storage... 
00:13:30.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:30.635 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:30.635 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:13:30.635 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:30.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.899 --rc genhtml_branch_coverage=1 00:13:30.899 --rc genhtml_function_coverage=1 00:13:30.899 --rc genhtml_legend=1 00:13:30.899 --rc geninfo_all_blocks=1 00:13:30.899 --rc geninfo_unexecuted_blocks=1 00:13:30.899 00:13:30.899 ' 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:30.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.899 --rc genhtml_branch_coverage=1 00:13:30.899 --rc genhtml_function_coverage=1 00:13:30.899 --rc genhtml_legend=1 00:13:30.899 --rc geninfo_all_blocks=1 00:13:30.899 --rc geninfo_unexecuted_blocks=1 00:13:30.899 00:13:30.899 ' 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:30.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.899 --rc genhtml_branch_coverage=1 00:13:30.899 --rc genhtml_function_coverage=1 00:13:30.899 --rc genhtml_legend=1 00:13:30.899 --rc geninfo_all_blocks=1 00:13:30.899 --rc geninfo_unexecuted_blocks=1 00:13:30.899 00:13:30.899 ' 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:30.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.899 --rc genhtml_branch_coverage=1 00:13:30.899 --rc genhtml_function_coverage=1 00:13:30.899 --rc genhtml_legend=1 00:13:30.899 --rc geninfo_all_blocks=1 00:13:30.899 --rc geninfo_unexecuted_blocks=1 00:13:30.899 00:13:30.899 ' 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
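The lt/cmp_versions exchange traced just above (scripts/common.sh) decides whether the installed lcov predates version 2 so the harness can pick matching LCOV_OPTS. A simplified sketch of that comparison, assuming purely numeric fields; the real helper additionally validates each field through its decimal function, which is omitted here:

  # Split both versions on '.', '-' or ':' and compare numerically, field by field.
  # Missing fields count as 0, so "1.15" vs "2" compares as (1,15) vs (2,0).
  lt() { cmp_versions "$1" '<' "$2"; }
  cmp_versions() {
      local -a ver1 ver2
      local IFS=.-:
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == '>' ]]; return; }
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == '<' ]]; return; }
      done
      [[ $2 == '==' ]]   # all fields equal: true only if equality was asked for
  }
  lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'lcov older than 2'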
00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:30.899 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:30.899 15:23:19 
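The "[: : integer expression expected" complaint above comes from nvmf/common.sh line 33 running an integer test against a variable that is unset in this job ('[' '' -eq 1 ']'). It is harmless here because the test simply falls through, but the usual hardening is to default the expansion before comparing; a sketch with a stand-in name (SOME_TEST_FLAG is hypothetical, not the flag common.sh actually reads):

# [ "" -eq 1 ] is a syntax-level failure for test(1);
# :-0 keeps the operand numeric when the flag was never exported
if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi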
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:30.899 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:30.900 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:30.900 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.900 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:30.900 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.900 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:30.900 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:30.900 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:13:30.900 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.049 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:39.049 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:13:39.049 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:39.049 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:39.049 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:39.049 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:39.049 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:39.049 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:13:39.049 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:39.050 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:39.050 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:39.050 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:39.050 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:39.050 15:23:26 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:13:39.050 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:13:39.050 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:13:39.050 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:39.050 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:39.050 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:13:39.050 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:13:39.050 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:39.050 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.585 ms
00:13:39.050 
00:13:39.050 --- 10.0.0.2 ping statistics ---
00:13:39.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:39.050 rtt min/avg/max/mdev = 0.585/0.585/0.585/0.000 ms
00:13:39.050 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:39.050 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:39.050 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms
00:13:39.050 
00:13:39.050 --- 10.0.0.1 ping statistics ---
00:13:39.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:39.050 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms
00:13:39.050 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:39.050 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0
00:13:39.050 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:13:39.050 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:39.050 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:13:39.050 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:13:39.050 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:39.050 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:13:39.050 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:13:39.050 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF
00:13:39.050 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:13:39.050 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable
00:13:39.050 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:39.051 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=516309
00:13:39.051 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 516309
00:13:39.051 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:13:39.051 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 516309 ']'
00:13:39.051 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:39.051 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:39.051 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:39.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:39.051 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:39.051 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:39.051 [2024-11-20 15:23:27.254517] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization...
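The connectivity phase just traced reads as one small recipe: move one port of the back-to-back NIC pair into a private namespace, address both ends, open the NVMe/TCP port, prove reachability in both directions, then launch the target inside the namespace. Condensed from the commands in the trace (interface names, addresses and flags copied from the log; run as root from the SPDK repo root):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &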
00:13:39.051 [2024-11-20 15:23:27.254579] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:39.051 [2024-11-20 15:23:27.356238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:39.051 [2024-11-20 15:23:27.409684] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:39.051 [2024-11-20 15:23:27.409739] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:39.051 [2024-11-20 15:23:27.409749] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:39.051 [2024-11-20 15:23:27.409757] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:39.051 [2024-11-20 15:23:27.409763] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:39.051 [2024-11-20 15:23:27.411854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.051 [2024-11-20 15:23:27.412015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:39.051 [2024-11-20 15:23:27.412198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.051 [2024-11-20 15:23:27.412198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:39.313 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:39.313 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:39.313 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:39.313 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:39.313 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.313 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:39.313 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:39.313 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.313 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.313 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.313 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:39.313 "tick_rate": 2400000000, 00:13:39.313 "poll_groups": [ 00:13:39.313 { 00:13:39.313 "name": "nvmf_tgt_poll_group_000", 00:13:39.313 "admin_qpairs": 0, 00:13:39.313 "io_qpairs": 0, 00:13:39.313 "current_admin_qpairs": 0, 00:13:39.313 "current_io_qpairs": 0, 00:13:39.313 "pending_bdev_io": 0, 00:13:39.313 "completed_nvme_io": 0, 00:13:39.313 "transports": [] 00:13:39.313 }, 00:13:39.313 { 00:13:39.313 "name": "nvmf_tgt_poll_group_001", 00:13:39.313 "admin_qpairs": 0, 00:13:39.313 "io_qpairs": 0, 00:13:39.313 "current_admin_qpairs": 0, 00:13:39.313 "current_io_qpairs": 0, 00:13:39.313 "pending_bdev_io": 0, 00:13:39.313 "completed_nvme_io": 0, 00:13:39.313 "transports": [] 00:13:39.313 }, 00:13:39.313 { 00:13:39.313 "name": "nvmf_tgt_poll_group_002", 00:13:39.313 "admin_qpairs": 0, 00:13:39.313 "io_qpairs": 0, 00:13:39.313 
"current_admin_qpairs": 0, 00:13:39.313 "current_io_qpairs": 0, 00:13:39.313 "pending_bdev_io": 0, 00:13:39.313 "completed_nvme_io": 0, 00:13:39.313 "transports": [] 00:13:39.313 }, 00:13:39.313 { 00:13:39.313 "name": "nvmf_tgt_poll_group_003", 00:13:39.313 "admin_qpairs": 0, 00:13:39.313 "io_qpairs": 0, 00:13:39.313 "current_admin_qpairs": 0, 00:13:39.313 "current_io_qpairs": 0, 00:13:39.313 "pending_bdev_io": 0, 00:13:39.313 "completed_nvme_io": 0, 00:13:39.313 "transports": [] 00:13:39.313 } 00:13:39.313 ] 00:13:39.313 }' 00:13:39.313 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:39.313 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:39.313 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:39.313 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:39.313 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:39.313 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:39.313 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:39.313 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:39.313 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.313 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.313 [2024-11-20 15:23:28.244920] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:39.313 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.313 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:39.313 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.313 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.576 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.576 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:39.576 "tick_rate": 2400000000, 00:13:39.576 "poll_groups": [ 00:13:39.576 { 00:13:39.576 "name": "nvmf_tgt_poll_group_000", 00:13:39.576 "admin_qpairs": 0, 00:13:39.576 "io_qpairs": 0, 00:13:39.576 "current_admin_qpairs": 0, 00:13:39.576 "current_io_qpairs": 0, 00:13:39.576 "pending_bdev_io": 0, 00:13:39.576 "completed_nvme_io": 0, 00:13:39.576 "transports": [ 00:13:39.576 { 00:13:39.576 "trtype": "TCP" 00:13:39.576 } 00:13:39.576 ] 00:13:39.576 }, 00:13:39.576 { 00:13:39.576 "name": "nvmf_tgt_poll_group_001", 00:13:39.576 "admin_qpairs": 0, 00:13:39.576 "io_qpairs": 0, 00:13:39.576 "current_admin_qpairs": 0, 00:13:39.576 "current_io_qpairs": 0, 00:13:39.576 "pending_bdev_io": 0, 00:13:39.576 "completed_nvme_io": 0, 00:13:39.576 "transports": [ 00:13:39.576 { 00:13:39.576 "trtype": "TCP" 00:13:39.576 } 00:13:39.576 ] 00:13:39.576 }, 00:13:39.576 { 00:13:39.576 "name": "nvmf_tgt_poll_group_002", 00:13:39.576 "admin_qpairs": 0, 00:13:39.576 "io_qpairs": 0, 00:13:39.576 "current_admin_qpairs": 0, 00:13:39.576 "current_io_qpairs": 0, 00:13:39.576 "pending_bdev_io": 0, 00:13:39.576 "completed_nvme_io": 0, 00:13:39.576 "transports": [ 00:13:39.576 { 00:13:39.576 "trtype": "TCP" 
00:13:39.576 } 00:13:39.576 ] 00:13:39.576 }, 00:13:39.576 { 00:13:39.576 "name": "nvmf_tgt_poll_group_003", 00:13:39.576 "admin_qpairs": 0, 00:13:39.576 "io_qpairs": 0, 00:13:39.576 "current_admin_qpairs": 0, 00:13:39.576 "current_io_qpairs": 0, 00:13:39.576 "pending_bdev_io": 0, 00:13:39.576 "completed_nvme_io": 0, 00:13:39.576 "transports": [ 00:13:39.576 { 00:13:39.576 "trtype": "TCP" 00:13:39.576 } 00:13:39.576 ] 00:13:39.576 } 00:13:39.576 ] 00:13:39.576 }' 00:13:39.576 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:39.576 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:39.576 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:39.576 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:39.576 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:39.576 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:39.576 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:39.576 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:39.576 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:39.576 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:39.576 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:39.576 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:39.576 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:39.576 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:39.576 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.576 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.576 Malloc1 00:13:39.576 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.576 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:39.576 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.576 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.576 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.576 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:39.577 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.577 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.577 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.577 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:39.577 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
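The jcount/jsum assertions above only pipe the captured nvmf_get_stats JSON through jq: the first counts poll groups (one per core in the 0xF mask), the second sums a numeric field across them. The same checks, sketched against rpc.py directly instead of the suite's rpc_cmd wrapper:

stats=$(./scripts/rpc.py nvmf_get_stats)
echo "$stats" | jq '.poll_groups[].name' | wc -l                              # expect 4 with -m 0xF
echo "$stats" | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'   # expect 0 before any connect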
common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.577 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.577 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.577 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:39.577 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.577 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.577 [2024-11-20 15:23:28.450957] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:39.577 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.577 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:39.577 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:39.577 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:39.577 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:39.577 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:39.577 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:39.577 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:39.577 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:39.577 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:39.577 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:39.577 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:39.577 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:39.577 [2024-11-20 15:23:28.487962] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:13:39.577 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:39.577 could not add new controller: failed to write to nvme-fabrics device 00:13:39.577 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:39.577 15:23:28 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:39.577 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:39.577 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:39.577 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:39.577 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.577 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.577 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.577 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:41.491 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:41.491 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:41.491 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:41.491 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:41.491 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:43.404 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:43.404 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:43.404 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:43.404 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:43.404 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:43.404 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:43.404 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:43.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.404 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:43.404 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:43.404 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:43.404 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:43.404 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:43.404 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:43.404 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:43.404 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:43.404 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.404 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.404 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.404 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:43.404 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:43.404 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:43.404 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:43.404 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:43.404 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:43.404 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:43.404 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:43.404 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:43.404 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:43.404 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:43.404 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:43.404 [2024-11-20 15:23:32.265232] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:13:43.404 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:43.404 could not add new controller: failed to write to nvme-fabrics device 00:13:43.404 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:43.404 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:43.404 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:43.404 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:43.404 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:43.404 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.404 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.404 
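The connect failures above are the access-control behaviour under test: while allow_any_host is disabled, a host whose NQN is not on the subsystem's list is rejected with "does not allow host" and the write to /dev/nvme-fabrics fails; whitelisting the NQN, or re-enabling allow_any_host, makes the same connect succeed. The flow in plain rpc.py/nvme-cli form ($HOST_NQN stands in for the long uuid-based NQN in the log):

nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$HOST_NQN"   # rejected
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOST_NQN"
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$HOST_NQN"   # accepted
./scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 "$HOST_NQN"            # blocks it again
./scripts/rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1                  # or open to all hosts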
15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.404 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:45.317 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:45.317 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:45.317 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:45.317 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:45.317 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:47.230 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:47.230 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:47.230 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:47.230 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:47.230 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:47.230 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:47.230 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:47.230 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.230 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:47.230 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:47.230 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:47.230 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:47.230 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:47.230 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:47.230 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:47.230 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:47.230 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.230 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.230 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.230 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:47.230 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:47.230 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:47.230 
15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.230 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.230 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.230 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.230 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.230 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.230 [2024-11-20 15:23:36.036390] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.230 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.230 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:47.230 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.230 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.230 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.230 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:47.230 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.230 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.230 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.230 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:49.144 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:49.144 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:49.144 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:49.144 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:49.144 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:51.060 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.060 [2024-11-20 15:23:39.799350] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.060 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:52.443 15:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:52.443 15:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:52.443 15:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:52.443 15:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:52.443 15:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:54.989 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.989 [2024-11-20 15:23:43.558064] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.989 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:56.375 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:56.375 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:56.375 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:56.375 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:56.375 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:58.288 
15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:58.288 15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:58.288 15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:58.288 15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:58.288 15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:58.288 15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:58.288 15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:58.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.548 15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:58.548 15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:58.548 15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:58.548 15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:58.548 15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:58.548 15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:58.548 15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:58.548 15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:58.548 15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.548 15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.548 15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.548 15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:58.548 15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.548 15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.548 15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.548 15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:58.548 15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:58.548 15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.548 15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.548 15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.548 15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:58.548 15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:58.548 15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.548 [2024-11-20 15:23:47.368612] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:58.548 15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.548 15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:58.548 15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.548 15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.548 15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.548 15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:58.548 15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.548 15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.548 15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.548 15:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:59.931 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:59.931 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:59.931 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:59.931 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:59.931 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:02.476 15:23:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:02.476 15:23:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:02.476 15:23:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:02.476 15:23:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:02.476 15:23:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:02.476 15:23:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:02.476 15:23:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:02.476 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.476 15:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:02.476 15:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:02.476 15:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:02.476 15:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:14:02.476 15:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:02.476 15:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:02.476 15:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:02.476 15:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:02.476 15:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.476 15:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.476 15:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.476 15:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:02.476 15:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.476 15:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.476 15:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.476 15:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:02.476 15:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:02.476 15:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.476 15:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.476 15:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.476 15:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:02.476 15:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.476 15:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.476 [2024-11-20 15:23:51.083769] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:02.476 15:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.476 15:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:02.476 15:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.476 15:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.476 15:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.476 15:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:02.476 15:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.476 15:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.476 15:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.476 15:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:03.958 15:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:03.958 15:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:03.958 15:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:03.958 15:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:03.958 15:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:05.871 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:05.871 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:05.871 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:05.871 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:05.871 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:05.871 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:05.871 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:05.871 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.871 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:05.871 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:05.871 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:05.871 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:05.871 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:06.132 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:06.132 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:06.132 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:06.132 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.132 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.132 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:14:06.133 
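The repeated blocks above are one pass each through the target/rpc.sh@81 loop: build a subsystem, expose it over TCP, connect from the kernel initiator, then tear everything down. A condensed sketch of that loop body, using only commands visible in the trace (the scripts/rpc.py path appears later in this log; the inline polling stands in for the harness's waitforserial helpers and is illustrative):
#!/usr/bin/env bash
NQN=nqn.2016-06.io.spdk:cnode1
SERIAL=SPDKISFASTANDAWESOME
for i in $(seq 1 "$loops"); do
  scripts/rpc.py nvmf_create_subsystem "$NQN" -s "$SERIAL"
  scripts/rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5
  scripts/rpc.py nvmf_subsystem_allow_any_host "$NQN"
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
  # waitforserial: poll until the namespace surfaces as a block device
  until lsblk -l -o NAME,SERIAL | grep -qw "$SERIAL"; do sleep 2; done
  nvme disconnect -n "$NQN"
  # waitforserial_disconnect: poll until the block device is gone again
  while lsblk -l -o NAME,SERIAL | grep -qw "$SERIAL"; do sleep 1; done
  scripts/rpc.py nvmf_subsystem_remove_ns "$NQN" 5
  scripts/rpc.py nvmf_delete_subsystem "$NQN"
done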
15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.133 [2024-11-20 15:23:54.894510] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.133 [2024-11-20 15:23:54.962636] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.133 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.133 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.133 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:06.133 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.133 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.133 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.133 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:06.133 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:06.133 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.133 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.133 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.133 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:06.133 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.133 
15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.133 [2024-11-20 15:23:55.030802] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.133 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.133 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:06.133 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.133 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.133 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.133 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:06.133 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.133 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.133 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.133 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.133 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.133 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.133 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.133 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:06.133 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.133 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.133 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.133 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:06.133 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:06.133 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.133 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.395 [2024-11-20 15:23:55.103015] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.395 [2024-11-20 15:23:55.163209] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:06.395 "tick_rate": 2400000000, 00:14:06.395 "poll_groups": [ 00:14:06.395 { 00:14:06.395 "name": "nvmf_tgt_poll_group_000", 00:14:06.395 "admin_qpairs": 0, 00:14:06.395 "io_qpairs": 224, 00:14:06.395 "current_admin_qpairs": 0, 00:14:06.395 "current_io_qpairs": 0, 00:14:06.395 "pending_bdev_io": 0, 00:14:06.395 "completed_nvme_io": 396, 00:14:06.395 "transports": [ 00:14:06.395 { 00:14:06.395 "trtype": "TCP" 00:14:06.395 } 00:14:06.395 ] 00:14:06.395 }, 00:14:06.395 { 00:14:06.395 "name": "nvmf_tgt_poll_group_001", 00:14:06.395 "admin_qpairs": 1, 00:14:06.395 "io_qpairs": 223, 00:14:06.395 "current_admin_qpairs": 0, 00:14:06.395 "current_io_qpairs": 0, 00:14:06.395 "pending_bdev_io": 0, 00:14:06.395 "completed_nvme_io": 238, 00:14:06.395 "transports": [ 00:14:06.395 { 00:14:06.395 "trtype": "TCP" 00:14:06.395 } 00:14:06.395 ] 00:14:06.395 }, 00:14:06.395 { 00:14:06.395 "name": "nvmf_tgt_poll_group_002", 00:14:06.395 "admin_qpairs": 6, 00:14:06.395 "io_qpairs": 218, 00:14:06.395 "current_admin_qpairs": 0, 00:14:06.395 "current_io_qpairs": 0, 00:14:06.395 "pending_bdev_io": 0, 00:14:06.395 "completed_nvme_io": 381, 00:14:06.395 "transports": [ 00:14:06.395 { 00:14:06.395 "trtype": "TCP" 00:14:06.395 } 00:14:06.395 ] 00:14:06.395 }, 00:14:06.395 { 00:14:06.395 "name": "nvmf_tgt_poll_group_003", 00:14:06.395 "admin_qpairs": 0, 00:14:06.395 "io_qpairs": 224, 00:14:06.395 "current_admin_qpairs": 0, 00:14:06.395 "current_io_qpairs": 0, 00:14:06.395 "pending_bdev_io": 0, 00:14:06.395 "completed_nvme_io": 224, 00:14:06.395 "transports": [ 00:14:06.395 { 00:14:06.395 "trtype": "TCP" 00:14:06.395 } 00:14:06.395 ] 00:14:06.395 } 00:14:06.395 ] 00:14:06.395 }' 00:14:06.395 15:23:55 
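rpc.sh next aggregates these per-poll-group counters with its jsum helper, which the trace expands as jq piped to awk: jq projects one number per poll group, awk sums them. A self-contained sketch of that reduction, assuming the stats JSON has the shape printed above (the here-string plumbing is illustrative):
# Sum one numeric field across all poll groups, as jsum does in the trace.
jsum() {
  local filter=$1
  jq "$filter" <<<"$stats" | awk '{s+=$1} END {print s}'
}
stats=$(scripts/rpc.py nvmf_get_stats)
jsum '.poll_groups[].admin_qpairs'  # 0+1+6+0 = 7, matching the (( 7 > 0 )) check below
jsum '.poll_groups[].io_qpairs'     # 224+223+218+224 = 889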
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:06.395 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:06.396 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:06.396 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:06.396 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:06.396 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:06.396 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:14:06.396 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:06.396 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:06.396 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:14:06.396 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:06.396 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:14:06.396 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:06.396 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:14:06.396 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:06.396 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:06.396 rmmod nvme_tcp 00:14:06.657 rmmod nvme_fabrics 00:14:06.657 rmmod nvme_keyring 00:14:06.657 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:06.657 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:14:06.657 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:14:06.657 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 516309 ']' 00:14:06.657 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 516309 00:14:06.657 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 516309 ']' 00:14:06.657 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 516309 00:14:06.657 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:14:06.657 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:06.657 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 516309 00:14:06.657 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:06.657 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:06.657 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 516309' 
00:14:06.657 killing process with pid 516309 00:14:06.657 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 516309 00:14:06.657 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 516309 00:14:06.657 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:06.657 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:06.657 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:06.657 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:14:06.657 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:14:06.657 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:06.657 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:14:06.657 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:06.657 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:06.657 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.657 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:06.657 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.203 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:09.203 00:14:09.203 real 0m38.257s 00:14:09.203 user 1m54.706s 00:14:09.203 sys 0m7.928s 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.204 ************************************ 00:14:09.204 END TEST nvmf_rpc 00:14:09.204 ************************************ 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:09.204 ************************************ 00:14:09.204 START TEST nvmf_invalid 00:14:09.204 ************************************ 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:09.204 * Looking for test storage... 
00:14:09.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:09.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.204 --rc genhtml_branch_coverage=1 00:14:09.204 --rc genhtml_function_coverage=1 00:14:09.204 --rc genhtml_legend=1 00:14:09.204 --rc geninfo_all_blocks=1 00:14:09.204 --rc geninfo_unexecuted_blocks=1 00:14:09.204 00:14:09.204 ' 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:09.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.204 --rc genhtml_branch_coverage=1 00:14:09.204 --rc genhtml_function_coverage=1 00:14:09.204 --rc genhtml_legend=1 00:14:09.204 --rc geninfo_all_blocks=1 00:14:09.204 --rc geninfo_unexecuted_blocks=1 00:14:09.204 00:14:09.204 ' 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:09.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.204 --rc genhtml_branch_coverage=1 00:14:09.204 --rc genhtml_function_coverage=1 00:14:09.204 --rc genhtml_legend=1 00:14:09.204 --rc geninfo_all_blocks=1 00:14:09.204 --rc geninfo_unexecuted_blocks=1 00:14:09.204 00:14:09.204 ' 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:09.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.204 --rc genhtml_branch_coverage=1 00:14:09.204 --rc genhtml_function_coverage=1 00:14:09.204 --rc genhtml_legend=1 00:14:09.204 --rc geninfo_all_blocks=1 00:14:09.204 --rc geninfo_unexecuted_blocks=1 00:14:09.204 00:14:09.204 ' 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:09.204 15:23:57 
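The lt/cmp_versions trace above (scripts/common.sh) splits each version string on '.' and '-' and compares component-wise; the lcov check passes because 1 < 2 in the first field, so the legacy --rc options get exported. A condensed sketch of that comparison, following the IFS splitting and return values the trace shows (this body is a simplification of scripts/common.sh, not a verbatim copy):
# lt A B: succeed (return 0) when version A is strictly less than version B.
lt() {
  local -a ver1 ver2
  IFS=.- read -ra ver1 <<<"$1"
  IFS=.- read -ra ver2 <<<"$2"
  local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < max; v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1  # equal versions are not "less than"
}
lt 1.15 2 && echo "lcov < 2: use legacy --rc lcov_* option names"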
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.204 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:14:09.205 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:09.205 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:09.205 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:09.205 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.205 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.205 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:09.205 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:09.205 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:09.205 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:09.205 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:09.205 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:09.205 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
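The "[: : integer expression expected" complaint above is nvmf/common.sh line 33 running '[' '' -eq 1 ']' against an empty value; the run carries on because the test only gates an optional branch. A defensive form of that kind of guard (the variable name here is a placeholder, not the one common.sh uses):
# '[ "" -eq 1 ]' is an error because -eq needs integers; default the operand:
if [ "${OPTIONAL_FLAG:-0}" -eq 1 ]; then
  : # branch taken only when the flag is explicitly set to 1
fi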
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:09.205 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:09.205 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:09.205 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:09.205 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:09.205 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:09.205 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:09.205 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:09.205 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:09.205 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:09.205 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.205 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:09.205 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.205 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:09.205 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:09.205 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:14:09.205 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:17.346 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:17.346 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:17.346 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:17.347 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:17.347 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:17.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:17.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:14:17.347 00:14:17.347 --- 10.0.0.2 ping statistics --- 00:14:17.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.347 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:17.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:17.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:14:17.347 00:14:17.347 --- 10.0.0.1 ping statistics --- 00:14:17.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.347 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=526234 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 526234 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 526234 ']' 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:17.347 15:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:17.347 [2024-11-20 15:24:05.576855] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
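The namespace plumbing traced above is what lets one dual-port e810 NIC play both roles over TCP: port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as the target (10.0.0.2), its peer cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), TCP port 4420 is opened through iptables, both directions are ping-verified, and nvmf_tgt is then launched inside the namespace. A condensed sketch of those steps, with every command, interface name, and address taken verbatim from the trace (a sketch of the nvmf_tcp_init flow, not the full nvmf/common.sh helper):

# Two-namespace NVMe/TCP topology, as set up by nvmf_tcp_init above.
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean interfaces
ip netns add cvl_0_0_ns_spdk                           # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                     # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator reachability
# nvmf_tgt is then started inside the namespace, as in the trace:
# ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF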
00:14:17.347 [2024-11-20 15:24:05.576922] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:17.347 [2024-11-20 15:24:05.675898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:17.347 [2024-11-20 15:24:05.729843] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:17.347 [2024-11-20 15:24:05.729895] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:17.347 [2024-11-20 15:24:05.729906] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:17.347 [2024-11-20 15:24:05.729914] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:17.347 [2024-11-20 15:24:05.729920] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:17.347 [2024-11-20 15:24:05.731990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.347 [2024-11-20 15:24:05.732156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:17.347 [2024-11-20 15:24:05.732320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:17.347 [2024-11-20 15:24:05.732506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.609 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:17.609 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:14:17.609 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:17.609 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:17.609 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:17.609 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:17.609 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:17.609 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode31062 00:14:17.870 [2024-11-20 15:24:06.617384] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:17.870 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:17.870 { 00:14:17.870 "nqn": "nqn.2016-06.io.spdk:cnode31062", 00:14:17.870 "tgt_name": "foobar", 00:14:17.870 "method": "nvmf_create_subsystem", 00:14:17.870 "req_id": 1 00:14:17.870 } 00:14:17.870 Got JSON-RPC error response 00:14:17.870 response: 00:14:17.870 { 00:14:17.870 "code": -32603, 00:14:17.870 "message": "Unable to find target foobar" 00:14:17.870 }' 00:14:17.870 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:17.870 { 00:14:17.870 "nqn": "nqn.2016-06.io.spdk:cnode31062", 00:14:17.870 "tgt_name": "foobar", 00:14:17.870 "method": "nvmf_create_subsystem", 00:14:17.870 "req_id": 1 00:14:17.870 } 00:14:17.870 Got JSON-RPC error response 00:14:17.870 
response: 00:14:17.870 { 00:14:17.870 "code": -32603, 00:14:17.870 "message": "Unable to find target foobar" 00:14:17.870 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:17.870 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:17.870 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode15667 00:14:17.870 [2024-11-20 15:24:06.826266] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15667: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:18.131 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:18.131 { 00:14:18.131 "nqn": "nqn.2016-06.io.spdk:cnode15667", 00:14:18.131 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:18.131 "method": "nvmf_create_subsystem", 00:14:18.131 "req_id": 1 00:14:18.131 } 00:14:18.131 Got JSON-RPC error response 00:14:18.131 response: 00:14:18.131 { 00:14:18.131 "code": -32602, 00:14:18.131 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:18.131 }' 00:14:18.131 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:18.131 { 00:14:18.131 "nqn": "nqn.2016-06.io.spdk:cnode15667", 00:14:18.131 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:18.131 "method": "nvmf_create_subsystem", 00:14:18.131 "req_id": 1 00:14:18.131 } 00:14:18.131 Got JSON-RPC error response 00:14:18.131 response: 00:14:18.131 { 00:14:18.131 "code": -32602, 00:14:18.131 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:18.131 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:18.131 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:18.131 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode13856 00:14:18.131 [2024-11-20 15:24:07.034950] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13856: invalid model number 'SPDK_Controller' 00:14:18.131 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:18.131 { 00:14:18.131 "nqn": "nqn.2016-06.io.spdk:cnode13856", 00:14:18.131 "model_number": "SPDK_Controller\u001f", 00:14:18.131 "method": "nvmf_create_subsystem", 00:14:18.131 "req_id": 1 00:14:18.131 } 00:14:18.131 Got JSON-RPC error response 00:14:18.131 response: 00:14:18.131 { 00:14:18.131 "code": -32602, 00:14:18.131 "message": "Invalid MN SPDK_Controller\u001f" 00:14:18.131 }' 00:14:18.131 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:18.131 { 00:14:18.131 "nqn": "nqn.2016-06.io.spdk:cnode13856", 00:14:18.131 "model_number": "SPDK_Controller\u001f", 00:14:18.131 "method": "nvmf_create_subsystem", 00:14:18.131 "req_id": 1 00:14:18.131 } 00:14:18.131 Got JSON-RPC error response 00:14:18.131 response: 00:14:18.131 { 00:14:18.131 "code": -32602, 00:14:18.131 "message": "Invalid MN SPDK_Controller\u001f" 00:14:18.131 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:18.131 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:18.131 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:14:18.131 15:24:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:18.131 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:18.131 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:18.131 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:18.131 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.131 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:14:18.131 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:18.131 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:14:18.131 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.131 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.131 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:14:18.393 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:14:18.393 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:14:18.393 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.393 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.393 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:14:18.393 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:14:18.393 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:14:18.393 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.393 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.393 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:14:18.393 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:14:18.393 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:14:18.393 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.394 15:24:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:14:18.394 15:24:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:14:18.394 
15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 1 == \- ]] 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '1(pKq/XE`0l9kM8&'\''gl{a' 00:14:18.394 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '1(pKq/XE`0l9kM8&'\''gl{a' nqn.2016-06.io.spdk:cnode16651 00:14:18.657 [2024-11-20 15:24:07.420414] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16651: invalid serial number '1(pKq/XE`0l9kM8&'gl{a' 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:18.657 { 00:14:18.657 "nqn": "nqn.2016-06.io.spdk:cnode16651", 00:14:18.657 "serial_number": "1(pKq/XE`0l9kM8&'\''gl{a", 00:14:18.657 "method": "nvmf_create_subsystem", 00:14:18.657 "req_id": 1 00:14:18.657 } 00:14:18.657 Got JSON-RPC error response 00:14:18.657 response: 00:14:18.657 { 00:14:18.657 "code": -32602, 00:14:18.657 "message": "Invalid SN 1(pKq/XE`0l9kM8&'\''gl{a" 00:14:18.657 }' 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:18.657 { 00:14:18.657 "nqn": "nqn.2016-06.io.spdk:cnode16651", 00:14:18.657 "serial_number": "1(pKq/XE`0l9kM8&'gl{a", 00:14:18.657 "method": "nvmf_create_subsystem", 00:14:18.657 "req_id": 1 00:14:18.657 } 00:14:18.657 Got JSON-RPC error response 00:14:18.657 response: 00:14:18.657 { 00:14:18.657 "code": -32602, 00:14:18.657 "message": "Invalid SN 1(pKq/XE`0l9kM8&'gl{a" 00:14:18.657 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' 
'74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:14:18.657 
15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:14:18.657 
15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:14:18.657 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:14:18.658 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.658 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.658 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:14:18.658 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:14:18.658 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:14:18.658 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.658 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.658 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:14:18.658 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:14:18.658 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:14:18.658 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.658 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.658 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:14:18.658 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:14:18.658 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:14:18.658 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.658 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.658 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:14:18.658 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:18.658 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:14:18.658 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.658 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.658 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:14:18.658 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:14:18.658 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:14:18.658 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.658 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.658 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
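The long printf/echo run threading through this section is target/invalid.sh's gen_random_s unrolled one character at a time: a chars array holds the ASCII codes 32 through 127, and each iteration picks a code, renders it in hex with printf %x, converts it to a character with echo -e '\xNN', and appends it to string. A condensed sketch of that loop (the chars array and the hex conversion are straight from the trace; selecting the index with $RANDOM is an assumption, though RANDOM is pinned to 0 at invalid.sh@16, which makes the generated strings reproducible):

# Sketch of the gen_random_s loop whose iterations are traced above.
gen_random_s() {
    local length=$1 ll string
    local chars=({32..127})   # ASCII codes, matching the traced chars array
    for ((ll = 0; ll < length; ll++)); do
        # decimal code -> hex -> character, e.g. 49 -> \x31 -> '1'
        string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
    done
    echo "$string"
}
# gen_random_s 41 is what yields the 41-character model number tested just below.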
00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.920 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.921 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:14:18.921 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:14:18.921 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:14:18.921 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.921 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.921 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:14:18.921 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x2f' 00:14:18.921 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:14:18.921 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.921 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.921 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:14:18.921 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:14:18.921 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:14:18.921 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.921 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.921 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:14:18.921 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:14:18.921 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:14:18.921 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.921 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.921 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:14:18.921 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:14:18.921 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:14:18.921 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.921 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.921 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ b == \- ]] 00:14:18.921 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'b1Xz0MU<\Yrx#!OmW]&Q!1rkP{lt2).8\iJ8L/.Q?' 00:14:18.921 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'b1Xz0MU<\Yrx#!OmW]&Q!1rkP{lt2).8\iJ8L/.Q?' nqn.2016-06.io.spdk:cnode2701 00:14:19.183 [2024-11-20 15:24:07.946305] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2701: invalid model number 'b1Xz0MU<\Yrx#!OmW]&Q!1rkP{lt2).8\iJ8L/.Q?' 00:14:19.183 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:14:19.183 { 00:14:19.183 "nqn": "nqn.2016-06.io.spdk:cnode2701", 00:14:19.183 "model_number": "b1Xz0MU<\\Yrx#!OmW]&Q!1rkP{lt2).8\\iJ8L/.Q?", 00:14:19.183 "method": "nvmf_create_subsystem", 00:14:19.183 "req_id": 1 00:14:19.183 } 00:14:19.183 Got JSON-RPC error response 00:14:19.183 response: 00:14:19.183 { 00:14:19.183 "code": -32602, 00:14:19.183 "message": "Invalid MN b1Xz0MU<\\Yrx#!OmW]&Q!1rkP{lt2).8\\iJ8L/.Q?" 
00:14:19.183 }' 00:14:19.183 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:14:19.183 { 00:14:19.183 "nqn": "nqn.2016-06.io.spdk:cnode2701", 00:14:19.183 "model_number": "b1Xz0MU<\\Yrx#!OmW]&Q!1rkP{lt2).8\\iJ8L/.Q?", 00:14:19.183 "method": "nvmf_create_subsystem", 00:14:19.183 "req_id": 1 00:14:19.183 } 00:14:19.183 Got JSON-RPC error response 00:14:19.183 response: 00:14:19.183 { 00:14:19.183 "code": -32602, 00:14:19.183 "message": "Invalid MN b1Xz0MU<\\Yrx#!OmW]&Q!1rkP{lt2).8\\iJ8L/.Q?" 00:14:19.183 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:19.183 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:14:19.183 [2024-11-20 15:24:08.134985] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:19.444 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:14:19.444 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:14:19.444 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:14:19.444 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:14:19.444 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:14:19.444 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:14:19.705 [2024-11-20 15:24:08.517582] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:14:19.705 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:14:19.705 { 00:14:19.705 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:19.705 "listen_address": { 00:14:19.705 "trtype": "tcp", 00:14:19.705 "traddr": "", 00:14:19.705 "trsvcid": "4421" 00:14:19.705 }, 00:14:19.705 "method": "nvmf_subsystem_remove_listener", 00:14:19.705 "req_id": 1 00:14:19.705 } 00:14:19.705 Got JSON-RPC error response 00:14:19.705 response: 00:14:19.705 { 00:14:19.705 "code": -32602, 00:14:19.705 "message": "Invalid parameters" 00:14:19.705 }' 00:14:19.705 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:14:19.705 { 00:14:19.705 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:19.705 "listen_address": { 00:14:19.705 "trtype": "tcp", 00:14:19.705 "traddr": "", 00:14:19.705 "trsvcid": "4421" 00:14:19.705 }, 00:14:19.705 "method": "nvmf_subsystem_remove_listener", 00:14:19.705 "req_id": 1 00:14:19.705 } 00:14:19.705 Got JSON-RPC error response 00:14:19.705 response: 00:14:19.705 { 00:14:19.705 "code": -32602, 00:14:19.705 "message": "Invalid parameters" 00:14:19.705 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:14:19.705 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27022 -i 0 00:14:19.967 [2024-11-20 15:24:08.706170] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27022: invalid cntlid range [0-65519] 00:14:19.967 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 
00:14:19.967 { 00:14:19.967 "nqn": "nqn.2016-06.io.spdk:cnode27022", 00:14:19.967 "min_cntlid": 0, 00:14:19.967 "method": "nvmf_create_subsystem", 00:14:19.967 "req_id": 1 00:14:19.967 } 00:14:19.967 Got JSON-RPC error response 00:14:19.967 response: 00:14:19.967 { 00:14:19.967 "code": -32602, 00:14:19.967 "message": "Invalid cntlid range [0-65519]" 00:14:19.967 }' 00:14:19.967 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:14:19.967 { 00:14:19.967 "nqn": "nqn.2016-06.io.spdk:cnode27022", 00:14:19.967 "min_cntlid": 0, 00:14:19.967 "method": "nvmf_create_subsystem", 00:14:19.967 "req_id": 1 00:14:19.967 } 00:14:19.967 Got JSON-RPC error response 00:14:19.967 response: 00:14:19.967 { 00:14:19.967 "code": -32602, 00:14:19.967 "message": "Invalid cntlid range [0-65519]" 00:14:19.967 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:19.967 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9840 -i 65520 00:14:19.967 [2024-11-20 15:24:08.894800] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9840: invalid cntlid range [65520-65519] 00:14:20.230 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:14:20.230 { 00:14:20.230 "nqn": "nqn.2016-06.io.spdk:cnode9840", 00:14:20.230 "min_cntlid": 65520, 00:14:20.230 "method": "nvmf_create_subsystem", 00:14:20.230 "req_id": 1 00:14:20.230 } 00:14:20.230 Got JSON-RPC error response 00:14:20.230 response: 00:14:20.230 { 00:14:20.230 "code": -32602, 00:14:20.230 "message": "Invalid cntlid range [65520-65519]" 00:14:20.230 }' 00:14:20.230 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:14:20.230 { 00:14:20.230 "nqn": "nqn.2016-06.io.spdk:cnode9840", 00:14:20.230 "min_cntlid": 65520, 00:14:20.230 "method": "nvmf_create_subsystem", 00:14:20.230 "req_id": 1 00:14:20.230 } 00:14:20.230 Got JSON-RPC error response 00:14:20.230 response: 00:14:20.230 { 00:14:20.230 "code": -32602, 00:14:20.230 "message": "Invalid cntlid range [65520-65519]" 00:14:20.230 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:20.230 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22166 -I 0 00:14:20.230 [2024-11-20 15:24:09.083363] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22166: invalid cntlid range [1-0] 00:14:20.230 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:14:20.230 { 00:14:20.230 "nqn": "nqn.2016-06.io.spdk:cnode22166", 00:14:20.230 "max_cntlid": 0, 00:14:20.230 "method": "nvmf_create_subsystem", 00:14:20.230 "req_id": 1 00:14:20.230 } 00:14:20.230 Got JSON-RPC error response 00:14:20.230 response: 00:14:20.230 { 00:14:20.230 "code": -32602, 00:14:20.230 "message": "Invalid cntlid range [1-0]" 00:14:20.230 }' 00:14:20.230 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:14:20.230 { 00:14:20.230 "nqn": "nqn.2016-06.io.spdk:cnode22166", 00:14:20.230 "max_cntlid": 0, 00:14:20.230 "method": "nvmf_create_subsystem", 00:14:20.230 "req_id": 1 00:14:20.230 } 00:14:20.230 Got JSON-RPC error response 00:14:20.230 response: 00:14:20.230 { 00:14:20.230 "code": -32602, 00:14:20.230 "message": 
"Invalid cntlid range [1-0]" 00:14:20.230 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:20.230 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17546 -I 65520 00:14:20.492 [2024-11-20 15:24:09.271954] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17546: invalid cntlid range [1-65520] 00:14:20.492 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:14:20.492 { 00:14:20.492 "nqn": "nqn.2016-06.io.spdk:cnode17546", 00:14:20.492 "max_cntlid": 65520, 00:14:20.492 "method": "nvmf_create_subsystem", 00:14:20.492 "req_id": 1 00:14:20.492 } 00:14:20.492 Got JSON-RPC error response 00:14:20.492 response: 00:14:20.492 { 00:14:20.492 "code": -32602, 00:14:20.492 "message": "Invalid cntlid range [1-65520]" 00:14:20.492 }' 00:14:20.492 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:14:20.492 { 00:14:20.492 "nqn": "nqn.2016-06.io.spdk:cnode17546", 00:14:20.492 "max_cntlid": 65520, 00:14:20.492 "method": "nvmf_create_subsystem", 00:14:20.492 "req_id": 1 00:14:20.492 } 00:14:20.492 Got JSON-RPC error response 00:14:20.492 response: 00:14:20.492 { 00:14:20.492 "code": -32602, 00:14:20.492 "message": "Invalid cntlid range [1-65520]" 00:14:20.492 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:20.492 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27868 -i 6 -I 5 00:14:20.753 [2024-11-20 15:24:09.460557] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27868: invalid cntlid range [6-5] 00:14:20.753 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:14:20.753 { 00:14:20.753 "nqn": "nqn.2016-06.io.spdk:cnode27868", 00:14:20.753 "min_cntlid": 6, 00:14:20.753 "max_cntlid": 5, 00:14:20.753 "method": "nvmf_create_subsystem", 00:14:20.753 "req_id": 1 00:14:20.753 } 00:14:20.753 Got JSON-RPC error response 00:14:20.753 response: 00:14:20.753 { 00:14:20.753 "code": -32602, 00:14:20.753 "message": "Invalid cntlid range [6-5]" 00:14:20.753 }' 00:14:20.753 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:14:20.753 { 00:14:20.753 "nqn": "nqn.2016-06.io.spdk:cnode27868", 00:14:20.753 "min_cntlid": 6, 00:14:20.753 "max_cntlid": 5, 00:14:20.753 "method": "nvmf_create_subsystem", 00:14:20.753 "req_id": 1 00:14:20.753 } 00:14:20.753 Got JSON-RPC error response 00:14:20.753 response: 00:14:20.753 { 00:14:20.753 "code": -32602, 00:14:20.753 "message": "Invalid cntlid range [6-5]" 00:14:20.753 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:20.753 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:14:20.753 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:14:20.753 { 00:14:20.753 "name": "foobar", 00:14:20.753 "method": "nvmf_delete_target", 00:14:20.753 "req_id": 1 00:14:20.753 } 00:14:20.753 Got JSON-RPC error response 00:14:20.753 response: 00:14:20.753 { 00:14:20.753 "code": -32602, 00:14:20.753 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:14:20.753 }' 00:14:20.753 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:14:20.753 { 00:14:20.753 "name": "foobar", 00:14:20.753 "method": "nvmf_delete_target", 00:14:20.753 "req_id": 1 00:14:20.753 } 00:14:20.753 Got JSON-RPC error response 00:14:20.753 response: 00:14:20.753 { 00:14:20.753 "code": -32602, 00:14:20.753 "message": "The specified target doesn't exist, cannot delete it." 00:14:20.753 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:14:20.753 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:14:20.753 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:14:20.753 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:20.753 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:14:20.753 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:20.753 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:14:20.753 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:20.753 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:20.753 rmmod nvme_tcp 00:14:20.753 rmmod nvme_fabrics 00:14:20.753 rmmod nvme_keyring 00:14:20.753 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:20.753 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:14:20.753 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:14:20.753 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 526234 ']' 00:14:20.753 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 526234 00:14:20.753 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 526234 ']' 00:14:20.753 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 526234 00:14:20.753 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:14:20.753 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:20.753 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 526234 00:14:21.014 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:21.014 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:21.014 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 526234' 00:14:21.014 killing process with pid 526234 00:14:21.014 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 526234 00:14:21.014 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 526234 00:14:21.014 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:21.014 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:21.014 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:21.014 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:14:21.014 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:14:21.014 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:21.014 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:14:21.014 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:21.014 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:21.014 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.014 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:21.014 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.022 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:23.022 00:14:23.022 real 0m14.186s 00:14:23.022 user 0m21.054s 00:14:23.022 sys 0m6.798s 00:14:23.022 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:23.022 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:23.022 ************************************ 00:14:23.022 END TEST nvmf_invalid 00:14:23.022 ************************************ 00:14:23.022 15:24:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:23.022 15:24:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:23.022 15:24:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:23.022 15:24:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:23.285 ************************************ 00:14:23.285 START TEST nvmf_connect_stress 00:14:23.285 ************************************ 00:14:23.285 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:23.285 * Looking for test storage... 
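
The nvmf_invalid phase that ends above drives scripts/rpc.py with deliberately malformed arguments and asserts on the JSON-RPC errors: a 41-character random model number (rejected with "Invalid MN"), cntlid ranges [0-65519], [65520-65519], [1-0], [1-65520] and [6-5] (each rejected with code -32602 "Invalid cntlid range"), a listener removal on an empty address, and a delete of the nonexistent target "foobar". The random string is assembled one byte at a time with printf %x / echo -e, as the ll-loop trace shows. A minimal standalone sketch of that pattern follows; it is not taken from the captured scripts, and the direct rpc.py invocation plus the error-substring check are assumptions mirroring the trace:

    #!/usr/bin/env bash
    # Sketch: build a random 41-char model number byte by byte and expect
    # nvmf_create_subsystem to reject it. Assumes a running SPDK target and
    # rpc.py on PATH; the NQN mirrors the cnode2701 seen in the trace.
    string=""
    for ((ll = 0; ll < 41; ll++)); do
        code=$(( (RANDOM % 94) + 33 ))     # printable ASCII 33..126
        hex=$(printf '%x' "$code")         # e.g. 5c for '\'
        string+=$(echo -e "\x$hex")        # decode one character, append it
    done
    # invalid.sh additionally special-cases strings whose first character is
    # '-' (the "[[ b == \- ]]" test in the trace); that handling is omitted.
    out=$(rpc.py nvmf_create_subsystem -d "$string" \
          nqn.2016-06.io.spdk:cnode2701 2>&1) || true
    [[ $out == *"Invalid MN"* ]] && echo "rejected with -32602 as expected"
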
00:14:23.285 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:23.285 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:23.285 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:14:23.285 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:23.285 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:23.285 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:23.285 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:23.285 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:23.285 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:14:23.285 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:14:23.285 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:14:23.285 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:14:23.285 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:14:23.285 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:14:23.285 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:14:23.285 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:23.285 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:14:23.285 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:14:23.285 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:23.285 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:23.285 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:14:23.285 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:14:23.286 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:23.286 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:14:23.286 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:14:23.286 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:14:23.286 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:14:23.286 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:23.286 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:14:23.286 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:14:23.286 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:23.286 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:23.286 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:14:23.286 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:23.286 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:23.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.286 --rc genhtml_branch_coverage=1 00:14:23.286 --rc genhtml_function_coverage=1 00:14:23.286 --rc genhtml_legend=1 00:14:23.286 --rc geninfo_all_blocks=1 00:14:23.286 --rc geninfo_unexecuted_blocks=1 00:14:23.286 00:14:23.286 ' 00:14:23.286 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:23.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.286 --rc genhtml_branch_coverage=1 00:14:23.286 --rc genhtml_function_coverage=1 00:14:23.286 --rc genhtml_legend=1 00:14:23.286 --rc geninfo_all_blocks=1 00:14:23.286 --rc geninfo_unexecuted_blocks=1 00:14:23.286 00:14:23.286 ' 00:14:23.286 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:23.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.286 --rc genhtml_branch_coverage=1 00:14:23.286 --rc genhtml_function_coverage=1 00:14:23.286 --rc genhtml_legend=1 00:14:23.286 --rc geninfo_all_blocks=1 00:14:23.286 --rc geninfo_unexecuted_blocks=1 00:14:23.286 00:14:23.286 ' 00:14:23.286 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:23.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.286 --rc genhtml_branch_coverage=1 00:14:23.286 --rc genhtml_function_coverage=1 00:14:23.286 --rc genhtml_legend=1 00:14:23.286 --rc geninfo_all_blocks=1 00:14:23.286 --rc geninfo_unexecuted_blocks=1 00:14:23.286 00:14:23.286 ' 00:14:23.286 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:23.286 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:23.286 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:23.286 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:23.286 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.286 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.286 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:23.286 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:23.286 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.286 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:23.286 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.286 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:23.286 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:23.286 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:23.548 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.548 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.548 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:23.548 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:23.548 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:23.548 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:14:23.548 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.548 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.548 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.548 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.548 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.548 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.548 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:23.548 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.548 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:14:23.548 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:23.548 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:23.548 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:23.548 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.548 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.548 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:14:23.548 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:23.548 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:23.548 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:23.548 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:23.548 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:23.548 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:23.548 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:23.549 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:23.549 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:23.549 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:23.549 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.549 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:23.549 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.549 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:23.549 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:23.549 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:14:23.549 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.696 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:31.696 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:14:31.696 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:31.696 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:31.696 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:31.696 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:31.696 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:31.696 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:14:31.696 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:31.696 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:14:31.696 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:14:31.696 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:14:31.696 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:14:31.696 15:24:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:14:31.696 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:14:31.696 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:31.696 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:31.697 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:31.697 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:31.697 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:31.697 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:31.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:31.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:14:31.697 00:14:31.697 --- 10.0.0.2 ping statistics --- 00:14:31.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.697 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:31.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:31.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:14:31.697 00:14:31.697 --- 10.0.0.1 ping statistics --- 00:14:31.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.697 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:31.697 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:31.698 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.698 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=531931 00:14:31.698 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 531931 00:14:31.698 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:31.698 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 531931 ']' 00:14:31.698 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.698 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:31.698 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:31.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.698 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:31.698 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.698 [2024-11-20 15:24:19.846787] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:14:31.698 [2024-11-20 15:24:19.846857] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.698 [2024-11-20 15:24:19.946323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:31.698 [2024-11-20 15:24:19.998073] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:31.698 [2024-11-20 15:24:19.998125] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:31.698 [2024-11-20 15:24:19.998135] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:31.698 [2024-11-20 15:24:19.998142] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:31.698 [2024-11-20 15:24:19.998149] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:31.698 [2024-11-20 15:24:19.999968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:31.698 [2024-11-20 15:24:20.000130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.698 [2024-11-20 15:24:20.000130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:31.959 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:31.959 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:14:31.959 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.960 [2024-11-20 15:24:20.722320] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
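
At this point the target is up inside the cvl_0_0_ns_spdk namespace and the script switches from environment setup to RPC-driven configuration; the listener and NULL1 bdev calls follow below. Condensed out of the rpc_cmd traces, the bring-up is a short sequence of rpc.py calls. A hedged sketch: the flags mirror the trace verbatim, the inline glosses are a reading of them rather than a statement from the log, and the final namespace-attach step is the usual flow for this test but falls outside the captured excerpt:

    # Target bring-up as plain rpc.py calls, mirroring the traced rpc_cmd
    # arguments. -u 8192 sets the transport io-unit size and -m 10 caps the
    # subsystem at 10 namespaces (hedged readings of the flags).
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_null_create NULL1 1000 512   # null bdev: 1000 MB, 512 B blocks
    # Assumed, not visible in this excerpt: expose the bdev as a namespace.
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
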
00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.960 [2024-11-20 15:24:20.747941] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.960 NULL1 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=532062 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.960 15:24:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.960 15:24:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 532062 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.960 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.534 15:24:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.534 15:24:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 532062 00:14:32.534 15:24:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.534 15:24:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.534 15:24:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.795 15:24:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.795 15:24:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 532062 00:14:32.795 15:24:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.795 15:24:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.795 15:24:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:33.057 15:24:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.057 15:24:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 532062 00:14:33.057 15:24:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.057 15:24:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.057 15:24:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:33.318 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.318 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 532062 00:14:33.318 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.318 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.318 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:33.579 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.579 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 532062 00:14:33.579 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.579 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.579 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.152 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.152 15:24:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 532062 00:14:34.152 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.152 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.152 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.413 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.413 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 532062 00:14:34.413 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.413 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.413 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.675 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.675 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 532062 00:14:34.675 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.675 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.675 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.936 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.936 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 532062 00:14:34.936 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.936 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.936 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.197 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.197 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 532062 00:14:35.197 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.197 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.197 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.770 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.770 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 532062 00:14:35.770 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.770 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.770 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:36.031 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.031 15:24:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 532062 00:14:36.031 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.031 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.031 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:36.291 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.291 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 532062 00:14:36.291 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.291 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.291 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:36.551 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.551 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 532062 00:14:36.551 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.551 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.551 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:36.811 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.811 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 532062 00:14:36.811 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.811 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.811 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.382 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.382 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 532062 00:14:37.382 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.382 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.382 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.642 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.642 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 532062 00:14:37.642 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.642 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.642 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.903 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.903 15:24:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 532062 00:14:37.903 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.903 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.903 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.163 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.163 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 532062 00:14:38.163 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.163 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.163 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.424 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.683 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 532062 00:14:38.683 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.683 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.683 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.945 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.945 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 532062 00:14:38.945 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.945 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.945 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.207 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.207 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 532062 00:14:39.207 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.207 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.207 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.468 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.468 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 532062 00:14:39.468 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.468 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.468 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.039 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.039 15:24:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 532062 00:14:40.039 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.039 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.039 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.300 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.300 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 532062 00:14:40.300 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.300 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.300 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.561 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.561 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 532062 00:14:40.561 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.561 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.561 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.822 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.822 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 532062 00:14:40.822 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.822 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.822 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.082 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.083 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 532062 00:14:41.083 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.083 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.083 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.653 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.653 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 532062 00:14:41.653 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.653 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.653 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.914 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.914 15:24:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 532062 00:14:41.914 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.914 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.914 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:42.175 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.175 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 532062 00:14:42.175 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:42.175 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.175 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:42.175 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:42.436 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.436 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 532062 00:14:42.436 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (532062) - No such process 00:14:42.436 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 532062 00:14:42.436 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:42.436 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:42.436 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:42.436 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:42.436 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:14:42.436 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:42.436 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:14:42.436 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:42.436 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:42.436 rmmod nvme_tcp 00:14:42.436 rmmod nvme_fabrics 00:14:42.436 rmmod nvme_keyring 00:14:42.436 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:42.436 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:14:42.436 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:14:42.436 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 531931 ']' 00:14:42.436 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 531931 00:14:42.436 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 531931 ']' 00:14:42.436 15:24:31 
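Once kill -0 reports "(532062) - No such process", the script waits on the stressor, deletes rpc.txt, clears its traps, and nvmftestfini tears the target down: sync, unload the NVMe initiator modules (the bare "rmmod nvme_tcp / nvme_fabrics / nvme_keyring" lines are modprobe's -v output), kill the nvmf_tgt reactor (pid 531931), then strip the test's iptables rules and flush the test interface just below. A rough reconstruction of that sequence from the trace; the retry framing around modprobe is inferred from the "for i in {1..20}" marker and is an assumption:

    sync
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break   # also drops nvme_fabrics and nvme_keyring
    done
    modprobe -v -r nvme-fabrics
    kill 531931                            # SIGTERM the nvmf_tgt reactor
    wait 531931
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the test ACCEPT rules
    ip -4 addr flush cvl_0_1               # traced just below, after netns removal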
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 531931 00:14:42.436 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:14:42.436 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:42.436 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 531931 00:14:42.697 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:42.697 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:42.697 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 531931' 00:14:42.697 killing process with pid 531931 00:14:42.697 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 531931 00:14:42.697 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 531931 00:14:42.697 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:42.697 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:42.697 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:42.697 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:14:42.697 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:14:42.697 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:42.697 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:14:42.697 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:42.697 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:42.697 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.697 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:42.697 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:45.243 00:14:45.243 real 0m21.618s 00:14:45.243 user 0m43.233s 00:14:45.243 sys 0m9.388s 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.243 ************************************ 00:14:45.243 END TEST nvmf_connect_stress 00:14:45.243 ************************************ 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:45.243 15:24:33 
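The "real 0m21.618s / user / sys" block and the starred END TEST banner are emitted by autotest_common.sh's run_test wrapper, which times each test script and brackets it with START/END banners before launching the next test (nvmf_fused_ordering) the same way. A hedged sketch of the wrapper's shape; the banner widths and the exact argument handling around the "'[' 3 -le 1 ']'" guard are assumptions:

    run_test() {
        [ $# -le 1 ] && return 1           # the arg-count guard seen as '[' 3 -le 1 ']'
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                          # yields the real/user/sys block above
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }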
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:45.243 ************************************ 00:14:45.243 START TEST nvmf_fused_ordering 00:14:45.243 ************************************ 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:45.243 * Looking for test storage... 00:14:45.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:45.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.243 --rc genhtml_branch_coverage=1 00:14:45.243 --rc genhtml_function_coverage=1 00:14:45.243 --rc genhtml_legend=1 00:14:45.243 --rc geninfo_all_blocks=1 00:14:45.243 --rc geninfo_unexecuted_blocks=1 00:14:45.243 00:14:45.243 ' 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:45.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.243 --rc genhtml_branch_coverage=1 00:14:45.243 --rc genhtml_function_coverage=1 00:14:45.243 --rc genhtml_legend=1 00:14:45.243 --rc geninfo_all_blocks=1 00:14:45.243 --rc geninfo_unexecuted_blocks=1 00:14:45.243 00:14:45.243 ' 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:45.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.243 --rc genhtml_branch_coverage=1 00:14:45.243 --rc genhtml_function_coverage=1 00:14:45.243 --rc genhtml_legend=1 00:14:45.243 --rc geninfo_all_blocks=1 00:14:45.243 --rc geninfo_unexecuted_blocks=1 00:14:45.243 00:14:45.243 ' 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:45.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.243 --rc genhtml_branch_coverage=1 00:14:45.243 --rc genhtml_function_coverage=1 00:14:45.243 --rc genhtml_legend=1 00:14:45.243 --rc geninfo_all_blocks=1 00:14:45.243 --rc geninfo_unexecuted_blocks=1 00:14:45.243 00:14:45.243 ' 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.243 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.244 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.244 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:45.244 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.244 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:14:45.244 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:45.244 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:45.244 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:45.244 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:45.244 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:45.244 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:14:45.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:45.244 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:45.244 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:45.244 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:45.244 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:45.244 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:45.244 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:45.244 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:45.244 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:45.244 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:45.244 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.244 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:45.244 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.244 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:45.244 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:45.244 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:14:45.244 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:53.382 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:53.382 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:14:53.382 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:53.382 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:53.382 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:53.382 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:53.382 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:53.382 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:14:53.382 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:53.382 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:14:53.382 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:14:53.382 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:14:53.382 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:14:53.382 15:24:41 
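gather_supported_nvmf_pci_devs, whose vendor arrays are declared here and filled just below, buckets known NIC PCI device IDs (Intel e810/x722, Mellanox ConnectX) out of a pci_bus_cache table and then narrows the candidate list to the e810 bucket for this rig, which is how the two 0x8086:0x159b ports at 0000:4b:00.0/1 end up as cvl_0_0 and cvl_0_1. The earlier "[: : integer expression expected" complaint is common.sh line 33 evaluating '[' '' -eq 1 ']' against an unset flag, which the script tolerates. A compressed sketch of the classification, assuming pci_bus_cache is populated elsewhere in common.sh:

    e810+=(${pci_bus_cache["$intel:0x1592"]})      # ice-driver device IDs
    e810+=(${pci_bus_cache["$intel:0x159b"]})      # matches the 0000:4b:00.x ports below
    x722+=(${pci_bus_cache["$intel:0x37d2"]})
    mlx+=(${pci_bus_cache["$mellanox:0x101d"]})    # one of several ConnectX IDs traced
    pci_devs+=("${e810[@]}")
    [[ e810 == e810 ]] && pci_devs=("${e810[@]}")  # e810 literal comes from the NIC selection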
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:14:53.382 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:14:53.382 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:53.383 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:53.383 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:53.383 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:53.383 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:53.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:53.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:14:53.383 00:14:53.383 --- 10.0.0.2 ping statistics --- 00:14:53.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.383 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:53.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:53.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:14:53.383 00:14:53.383 --- 10.0.0.1 ping statistics --- 00:14:53.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.383 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:53.383 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:53.384 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:53.384 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=538418 00:14:53.384 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 538418 00:14:53.384 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:53.384 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 538418 ']' 00:14:53.384 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.384 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:53.384 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:53.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.384 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:53.384 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:53.384 [2024-11-20 15:24:41.475240] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:14:53.384 [2024-11-20 15:24:41.475308] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.384 [2024-11-20 15:24:41.575889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.384 [2024-11-20 15:24:41.625741] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:53.384 [2024-11-20 15:24:41.625788] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:53.384 [2024-11-20 15:24:41.625797] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:53.384 [2024-11-20 15:24:41.625803] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:53.384 [2024-11-20 15:24:41.625809] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:53.384 [2024-11-20 15:24:41.626575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.384 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:53.384 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:14:53.384 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:53.384 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:53.384 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:53.645 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:53.645 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:53.645 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.645 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:53.645 [2024-11-20 15:24:42.356316] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:53.645 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.645 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:53.645 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.645 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:53.645 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:53.645 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:53.645 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.645 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:53.645 [2024-11-20 15:24:42.380613] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:53.645 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.645 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:53.645 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.645 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:53.645 NULL1 00:14:53.645 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.645 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:53.645 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.645 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:53.645 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.645 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:53.645 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.645 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:53.645 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.645 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:53.645 [2024-11-20 15:24:42.450211] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
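The rpc_cmd trace above is the entire target bring-up for this test case: create the TCP transport, create subsystem cnode1, add a 10.0.0.2:4420 listener, back it with a null bdev, attach that bdev as namespace 1, then point the fused_ordering tool at the subsystem. As a minimal sketch, the same sequence can be reproduced by hand with the stock scripts/rpc.py against an already-running nvmf_tgt; the commands, flags, and names below are taken verbatim from the log lines above, while the RPC shorthand variable and the comments are illustrative assumptions rather than the test's own code:

  # Sketch only: assumes nvmf_tgt is up and answering on the default /var/tmp/spdk.sock.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  $RPC nvmf_create_transport -t tcp -o -u 8192                  # TCP transport with the options used above
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -m 10                            # -a: allow any host; -m: max 10 namespaces
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420                                # listen on 10.0.0.2:4420
  $RPC bdev_null_create NULL1 1000 512                          # 1000 MiB, 512-byte blocks ("size: 1GB" below)
  $RPC bdev_wait_for_examine
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1   # becomes Namespace ID 1

  # Drive fused commands at the namespace, exactly as the test does:
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'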
00:14:53.645 [2024-11-20 15:24:42.450270] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid538567 ] 00:14:54.217 Attached to nqn.2016-06.io.spdk:cnode1 00:14:54.217 Namespace ID: 1 size: 1GB 00:14:54.217 fused_ordering(0) 
[fused_ordering(1) through fused_ordering(1022) elided: 1,022 consecutive fused_ordering(N) counter lines, timestamps advancing from 00:14:54.217 to 00:14:55.886] 
00:14:55.886 fused_ordering(1023) 00:14:56.147 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:56.147 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:56.147 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:56.147 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:14:56.147 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:56.147 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:14:56.147 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:56.147 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:56.147 rmmod nvme_tcp 00:14:56.147 rmmod nvme_fabrics 00:14:56.147 rmmod nvme_keyring 00:14:56.147 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:56.147 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:14:56.147 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:14:56.147 15:24:44 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 538418 ']' 00:14:56.147 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 538418 00:14:56.147 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 538418 ']' 00:14:56.147 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 538418 00:14:56.147 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:14:56.147 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:56.147 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 538418 00:14:56.147 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:56.147 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:56.147 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 538418' 00:14:56.147 killing process with pid 538418 00:14:56.147 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 538418 00:14:56.147 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 538418 00:14:56.408 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:56.408 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:56.408 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:56.408 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:14:56.408 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:14:56.408 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:56.408 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:14:56.408 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:56.408 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:56.408 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.408 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:56.408 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.321 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:58.321 00:14:58.321 real 0m13.514s 00:14:58.321 user 0m7.202s 00:14:58.321 sys 0m7.207s 00:14:58.321 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:58.321 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:58.321 ************************************ 00:14:58.321 END TEST nvmf_fused_ordering 00:14:58.321 
************************************ 00:14:58.321 15:24:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:58.321 15:24:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:58.321 15:24:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:58.321 15:24:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:58.582 ************************************ 00:14:58.582 START TEST nvmf_ns_masking 00:14:58.582 ************************************ 00:14:58.582 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:58.582 * Looking for test storage... 00:14:58.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:58.582 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:58.582 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:14:58.582 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:58.582 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:58.582 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:58.582 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:58.582 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:58.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.583 --rc genhtml_branch_coverage=1 00:14:58.583 --rc genhtml_function_coverage=1 00:14:58.583 --rc genhtml_legend=1 00:14:58.583 --rc geninfo_all_blocks=1 00:14:58.583 --rc geninfo_unexecuted_blocks=1 00:14:58.583 00:14:58.583 ' 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:58.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.583 --rc genhtml_branch_coverage=1 00:14:58.583 --rc genhtml_function_coverage=1 00:14:58.583 --rc genhtml_legend=1 00:14:58.583 --rc geninfo_all_blocks=1 00:14:58.583 --rc geninfo_unexecuted_blocks=1 00:14:58.583 00:14:58.583 ' 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:58.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.583 --rc genhtml_branch_coverage=1 00:14:58.583 --rc genhtml_function_coverage=1 00:14:58.583 --rc genhtml_legend=1 00:14:58.583 --rc geninfo_all_blocks=1 00:14:58.583 --rc geninfo_unexecuted_blocks=1 00:14:58.583 00:14:58.583 ' 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:58.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.583 --rc genhtml_branch_coverage=1 00:14:58.583 --rc genhtml_function_coverage=1 00:14:58.583 --rc genhtml_legend=1 00:14:58.583 --rc geninfo_all_blocks=1 00:14:58.583 --rc geninfo_unexecuted_blocks=1 00:14:58.583 00:14:58.583 ' 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:58.583 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:58.845 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:58.845 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:58.845 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:58.845 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.845 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.845 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.845 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:58.845 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.845 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:58.845 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:58.845 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:58.845 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:58.845 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:58.845 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:58.845 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:58.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:58.845 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:58.845 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:58.845 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:58.845 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:58.845 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:58.845 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:58.845 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:58.845 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=b277a20f-1c1e-4da0-b6a2-ce2f0dcb0159 00:14:58.845 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:58.845 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=f8a2c467-17d3-4fb1-aa46-9e240e5b0da0 00:14:58.845 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:58.845 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:58.845 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:58.846 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:58.846 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=d10f3fa7-e856-4165-9571-bcd5f122c90a 00:14:58.846 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:58.846 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:58.846 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:58.846 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:58.846 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:58.846 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:58.846 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.846 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:58.846 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.846 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:58.846 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:58.846 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:58.846 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:07.072 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:07.072 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:15:07.072 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:07.072 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:07.072 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:07.072 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:07.072 15:24:54 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:07.072 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:15:07.072 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:07.072 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:15:07.072 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:15:07.072 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:15:07.072 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:15:07.072 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:15:07.072 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:15:07.072 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:07.072 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:07.072 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:07.072 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:07.072 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:07.072 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:07.072 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:07.072 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:07.072 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:07.072 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:07.072 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:07.072 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:07.072 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:07.073 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:07.073 15:24:54 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:07.073 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:07.073 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
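The discovery trace above boils down to a small sysfs walk: for each whitelisted PCI function, the kernel exposes the backing network interface name under /sys/bus/pci/devices/<addr>/net/. A minimal standalone sketch of the same mapping, filtering on the 0x8086/0x159b (Intel E810) IDs echoed in this run; this is an illustration of the technique, not the test's own helper:

    # map supported NIC PCI functions to their kernel net device names
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor") device=$(cat "$pci/device")
        [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
        for netdir in "$pci"/net/*; do
            [[ -e $netdir ]] || continue    # function may have no bound netdev
            echo "Found ${pci##*/}: ${netdir##*/}"
        done
    done

On this host the loop would land on the two cvl_0_* ports reported below.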
00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:07.073 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:07.073 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:07.073 15:24:55 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:15:07.073 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:15:07.073 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:15:07.073 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:15:07.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:15:07.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms
00:15:07.073
00:15:07.073 --- 10.0.0.2 ping statistics ---
00:15:07.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:07.073 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms
00:15:07.073 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:15:07.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:15:07.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms
00:15:07.073
00:15:07.073 --- 10.0.0.1 ping statistics ---
00:15:07.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:07.073 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms
00:15:07.073 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:15:07.073 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0
00:15:07.073 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:15:07.073 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:15:07.073 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:15:07.073 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:15:07.073 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:15:07.073 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:15:07.073 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:15:07.073 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart
00:15:07.073 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:15:07.074 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable
00:15:07.074 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:15:07.074 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=543326
00:15:07.074 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 543326
00:15:07.074 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:15:07.074 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 543326 ']'
00:15:07.074 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking --
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.074 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:07.074 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.074 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:07.074 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:07.074 [2024-11-20 15:24:55.241260] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:15:07.074 [2024-11-20 15:24:55.241325] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:07.074 [2024-11-20 15:24:55.341210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.074 [2024-11-20 15:24:55.392222] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:07.074 [2024-11-20 15:24:55.392273] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:07.074 [2024-11-20 15:24:55.392282] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:07.074 [2024-11-20 15:24:55.392290] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:07.074 [2024-11-20 15:24:55.392296] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
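Everything from nvmf_tcp_init through waitforlisten above amounts to a two-port rig: one E810 port stays in the root namespace as the initiator (10.0.0.1), the other is moved into the cvl_0_0_ns_spdk namespace where the target listens (10.0.0.2), so traffic leaves the root namespace through a real port instead of loopback. A condensed sketch with the values from this run; the plumbing commands mirror the log, while the polling loop is an assumption standing in for waitforlisten (rpc.py's -s/-t options and rpc_get_methods are real):

    ip netns add cvl_0_0_ns_spdk                       # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    # stand-in for waitforlisten: poll until the RPC socket answers
    until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &> /dev/null; do
        sleep 0.5
    done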
00:15:07.074 [2024-11-20 15:24:55.393091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.333 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:07.333 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:07.333 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:07.333 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:07.333 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:07.333 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:07.333 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:07.333 [2024-11-20 15:24:56.259663] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:07.333 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:15:07.333 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:15:07.333 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:07.593 Malloc1 00:15:07.593 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:07.853 Malloc2 00:15:07.853 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:08.116 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:08.116 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:08.376 [2024-11-20 15:24:57.202187] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:08.376 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:08.376 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d10f3fa7-e856-4165-9571-bcd5f122c90a -a 10.0.0.2 -s 4420 -i 4 00:15:08.636 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:08.636 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:08.636 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:08.636 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:08.637 
15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:10.573 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:10.573 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:10.573 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:10.573 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:10.573 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:10.573 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:10.573 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:10.573 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:10.573 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:10.573 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:10.573 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:15:10.573 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:10.573 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:10.573 [ 0]:0x1 00:15:10.573 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:10.573 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:10.836 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=56021847317a41a597b6aa8fef2b2d4d 00:15:10.836 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 56021847317a41a597b6aa8fef2b2d4d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:10.836 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:10.836 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:15:10.836 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:10.836 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:10.836 [ 0]:0x1 00:15:10.836 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:10.836 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:11.096 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=56021847317a41a597b6aa8fef2b2d4d 00:15:11.096 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 56021847317a41a597b6aa8fef2b2d4d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:11.096 15:24:59 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:15:11.096 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:11.096 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:11.096 [ 1]:0x2 00:15:11.096 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:11.096 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:11.096 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=636f62d23e68437d848e3369d93d4ffd 00:15:11.096 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 636f62d23e68437d848e3369d93d4ffd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:11.096 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:15:11.096 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:11.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.096 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:11.356 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:11.616 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:15:11.616 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d10f3fa7-e856-4165-9571-bcd5f122c90a -a 10.0.0.2 -s 4420 -i 4 00:15:11.616 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:11.616 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:11.616 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:11.616 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:15:11.616 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:15:11.616 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:14.163 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:14.163 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:14.163 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:14.163 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:14.163 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:14.163 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:15:14.163 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:14.163 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:14.163 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:14.163 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:14.163 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:15:14.163 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:14.163 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:14.163 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:14.163 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:14.163 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:14.163 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:14.163 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:14.163 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:14.163 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:14.163 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:14.163 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:14.163 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:14.163 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:14.164 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:14.164 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:14.164 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:14.164 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:14.164 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:15:14.164 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:14.164 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:14.164 [ 0]:0x2 00:15:14.164 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:14.164 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:14.164 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=636f62d23e68437d848e3369d93d4ffd 00:15:14.164 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 636f62d23e68437d848e3369d93d4ffd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:14.164 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:14.164 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:15:14.164 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:14.164 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:14.164 [ 0]:0x1 00:15:14.164 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:14.164 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:14.164 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=56021847317a41a597b6aa8fef2b2d4d 00:15:14.164 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 56021847317a41a597b6aa8fef2b2d4d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:14.164 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:15:14.164 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:14.164 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:14.164 [ 1]:0x2 00:15:14.164 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:14.164 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:14.164 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=636f62d23e68437d848e3369d93d4ffd 00:15:14.164 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 636f62d23e68437d848e3369d93d4ffd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:14.164 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:14.424 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:15:14.424 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:14.424 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:14.424 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:14.424 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:14.424 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:14.424 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:14.424 15:25:03 
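Each @43-@45 block in the trace is one call to the script's ns_is_visible helper: a namespace counts as visible when nvme list-ns reports its NSID and nvme id-ns returns a non-zero NGUID, while a masked namespace identifies with 32 zeros, which is exactly what the NOT branches assert. Reconstructed as a self-contained sketch (the controller node is hard-coded here, where the script derives it from nvme list-subsys):

    ns_is_visible() {            # $1 = NSID, e.g. 0x1
        nvme list-ns /dev/nvme0 | grep "$1" || return 1
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        # an all-zero NGUID means the namespace is inactive for this host
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

    ns_is_visible 0x2 && echo "NSID 2 visible to this host"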
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:14.424 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:14.424 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:14.424 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:14.424 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:14.424 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:14.424 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:14.424 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:14.424 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:14.424 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:14.424 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:14.424 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:15:14.424 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:14.424 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:14.424 [ 0]:0x2 00:15:14.424 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:14.424 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:14.424 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=636f62d23e68437d848e3369d93d4ffd 00:15:14.424 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 636f62d23e68437d848e3369d93d4ffd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:14.424 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:15:14.424 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:14.424 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.424 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:14.686 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:15:14.686 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d10f3fa7-e856-4165-9571-bcd5f122c90a -a 10.0.0.2 -s 4420 -i 4 00:15:14.946 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:14.946 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:14.946 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:14.946 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:15:14.946 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:15:14.946 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:16.857 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:16.857 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:16.857 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:16.857 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:15:16.857 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:16.857 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:16.857 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:16.857 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:16.857 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:16.857 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:16.857 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:16.857 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:16.857 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:16.857 [ 0]:0x1 00:15:16.857 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:16.857 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:17.117 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=56021847317a41a597b6aa8fef2b2d4d 00:15:17.117 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 56021847317a41a597b6aa8fef2b2d4d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:17.117 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:17.117 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:17.117 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:17.117 [ 1]:0x2 00:15:17.117 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:17.117 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:17.117 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=636f62d23e68437d848e3369d93d4ffd 00:15:17.117 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 636f62d23e68437d848e3369d93d4ffd != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:17.117 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:17.378 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:17.378 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:17.378 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:17.378 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:17.378 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:17.378 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:17.378 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:17.378 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:17.378 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:17.378 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:17.378 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:17.378 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:17.378 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:17.378 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:17.378 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:17.378 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:17.378 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:17.378 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:17.378 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:15:17.378 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:17.378 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:17.378 [ 0]:0x2 00:15:17.378 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:17.378 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:17.378 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=636f62d23e68437d848e3369d93d4ffd 00:15:17.378 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 636f62d23e68437d848e3369d93d4ffd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:17.378 15:25:06 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:17.378 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:17.378 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:17.378 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:17.378 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:17.378 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:17.378 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:17.378 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:17.378 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:17.378 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:17.378 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:17.378 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:17.639 [2024-11-20 15:25:06.359639] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:17.639 request: 00:15:17.639 { 00:15:17.639 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:17.639 "nsid": 2, 00:15:17.639 "host": "nqn.2016-06.io.spdk:host1", 00:15:17.639 "method": "nvmf_ns_remove_host", 00:15:17.639 "req_id": 1 00:15:17.639 } 00:15:17.639 Got JSON-RPC error response 00:15:17.639 response: 00:15:17.639 { 00:15:17.639 "code": -32602, 00:15:17.639 "message": "Invalid parameters" 00:15:17.639 } 00:15:17.639 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:17.639 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:17.639 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:17.639 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:17.639 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:17.639 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:17.639 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:17.639 15:25:06 
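The @652-@679 chains wrap commands that are supposed to fail: NOT runs its argument and inverts the exit status, so the Got JSON-RPC error response above counts as a pass. A stripped-down sketch of the pattern (the real helper in autotest_common.sh also goes through valid_exec_arg; the es > 128 check mirrors its signal handling so a crash still fails the test):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"   # killed by a signal: real failure
        (( es != 0 ))                    # pass only if the command failed
    }

    # expected to fail: the host cannot be toggled on namespace 2 here
    NOT scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 \
        nqn.2016-06.io.spdk:host1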
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:17.639 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:17.639 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:17.639 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:17.639 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:17.639 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:17.639 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:17.639 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:17.639 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:17.639 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:17.639 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:17.639 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:17.639 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:17.639 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:17.639 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:17.639 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:17.639 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:17.639 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:17.639 [ 0]:0x2 00:15:17.639 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:17.639 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:17.639 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=636f62d23e68437d848e3369d93d4ffd 00:15:17.639 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 636f62d23e68437d848e3369d93d4ffd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:17.639 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:17.639 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:17.639 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.639 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=545619 00:15:17.639 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:17.639 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:17.639 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 545619 /var/tmp/host.sock 00:15:17.640 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 545619 ']' 00:15:17.640 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:17.640 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:17.640 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:17.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:17.640 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:17.640 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:17.901 [2024-11-20 15:25:06.600001] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:15:17.901 [2024-11-20 15:25:06.600052] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid545619 ] 00:15:17.901 [2024-11-20 15:25:06.687286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.901 [2024-11-20 15:25:06.723216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.472 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:18.472 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:18.472 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:18.733 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:18.994 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid b277a20f-1c1e-4da0-b6a2-ce2f0dcb0159 00:15:18.994 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:18.994 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g B277A20F1C1E4DA0B6A2CE2F0DCB0159 -i 00:15:18.994 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid f8a2c467-17d3-4fb1-aa46-9e240e5b0da0 00:15:18.994 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:19.254 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g F8A2C46717D34FB1AA469E240E5B0DA0 -i 00:15:19.254 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:19.515 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:19.775 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:19.775 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:19.775 nvme0n1 00:15:20.035 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:20.035 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:20.296 nvme1n2 00:15:20.296 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:20.296 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:20.296 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:20.296 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:20.296 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:20.296 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:20.296 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:20.296 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:20.296 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:20.556 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ b277a20f-1c1e-4da0-b6a2-ce2f0dcb0159 == \b\2\7\7\a\2\0\f\-\1\c\1\e\-\4\d\a\0\-\b\6\a\2\-\c\e\2\f\0\d\c\b\0\1\5\9 ]] 00:15:20.556 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:20.556 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:20.556 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:20.816 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
f8a2c467-17d3-4fb1-aa46-9e240e5b0da0 == \f\8\a\2\c\4\6\7\-\1\7\d\3\-\4\f\b\1\-\a\a\4\6\-\9\e\2\4\0\e\5\b\0\d\a\0 ]] 00:15:20.816 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:20.816 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:21.077 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid b277a20f-1c1e-4da0-b6a2-ce2f0dcb0159 00:15:21.077 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:21.077 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g B277A20F1C1E4DA0B6A2CE2F0DCB0159 00:15:21.077 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:21.077 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g B277A20F1C1E4DA0B6A2CE2F0DCB0159 00:15:21.077 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:21.077 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:21.077 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:21.077 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:21.077 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:21.077 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:21.077 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:21.077 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:21.077 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g B277A20F1C1E4DA0B6A2CE2F0DCB0159 00:15:21.337 [2024-11-20 15:25:10.073445] bdev.c:8437:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:15:21.337 [2024-11-20 15:25:10.073475] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:15:21.337 [2024-11-20 15:25:10.073484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.337 request: 00:15:21.337 { 00:15:21.337 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:21.337 "namespace": { 00:15:21.337 "bdev_name": 
"invalid", 00:15:21.337 "nsid": 1, 00:15:21.337 "nguid": "B277A20F1C1E4DA0B6A2CE2F0DCB0159", 00:15:21.337 "no_auto_visible": false 00:15:21.337 }, 00:15:21.337 "method": "nvmf_subsystem_add_ns", 00:15:21.337 "req_id": 1 00:15:21.337 } 00:15:21.337 Got JSON-RPC error response 00:15:21.337 response: 00:15:21.337 { 00:15:21.337 "code": -32602, 00:15:21.337 "message": "Invalid parameters" 00:15:21.337 } 00:15:21.337 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:21.337 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:21.337 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:21.337 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:21.337 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid b277a20f-1c1e-4da0-b6a2-ce2f0dcb0159 00:15:21.337 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:21.337 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g B277A20F1C1E4DA0B6A2CE2F0DCB0159 -i 00:15:21.337 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:15:23.881 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:15:23.881 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:15:23.881 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:23.881 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:15:23.881 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 545619 00:15:23.881 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 545619 ']' 00:15:23.881 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 545619 00:15:23.881 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:23.881 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:23.881 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 545619 00:15:23.881 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:23.881 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:23.881 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 545619' 00:15:23.881 killing process with pid 545619 00:15:23.881 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 545619 00:15:23.881 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 545619 00:15:23.881 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:24.141 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:15:24.141 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:15:24.141 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:24.141 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:15:24.141 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:24.141 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:15:24.141 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:24.141 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:24.141 rmmod nvme_tcp 00:15:24.141 rmmod nvme_fabrics 00:15:24.141 rmmod nvme_keyring 00:15:24.141 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:24.141 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:15:24.141 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:15:24.141 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 543326 ']' 00:15:24.141 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 543326 00:15:24.141 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 543326 ']' 00:15:24.141 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 543326 00:15:24.141 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:24.141 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:24.141 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 543326 00:15:24.141 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:24.141 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:24.141 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 543326' 00:15:24.141 killing process with pid 543326 00:15:24.141 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 543326 00:15:24.141 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 543326 00:15:24.402 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:24.402 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:24.402 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:24.402 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:15:24.402 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:15:24.402 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:24.402 
15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:15:24.402 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:24.402 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:24.402 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:24.402 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:24.402 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.314 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:26.314 00:15:26.314 real 0m27.910s 00:15:26.314 user 0m31.424s 00:15:26.314 sys 0m8.414s 00:15:26.314 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:26.314 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:26.314 ************************************ 00:15:26.314 END TEST nvmf_ns_masking 00:15:26.314 ************************************ 00:15:26.314 15:25:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:15:26.314 15:25:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:26.314 15:25:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:26.314 15:25:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:26.314 15:25:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:26.576 ************************************ 00:15:26.576 START TEST nvmf_nvme_cli 00:15:26.576 ************************************ 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:26.576 * Looking for test storage... 
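The ns_masking flow that just finished reduces to a short RPC sequence: convert each namespace UUID to an NGUID, attach the bdev with that NGUID and visibility withheld, then grant visibility per host NQN. A minimal sketch of that sequence, not part of the captured run: rpc.py stands for scripts/rpc.py in the SPDK tree, the subsystem and Malloc bdevs are assumed to exist already, and the uppercasing step is inferred from the NGUID the trace prints (the trace itself only shows 'tr -d -').

    uuid=b277a20f-1c1e-4da0-b6a2-ce2f0dcb0159
    # uuid2nguid: strip the dashes; uppercase inferred from the trace output
    nguid=$(echo "$uuid" | tr -d - | tr '[:lower:]' '[:upper:]')
    # attach the namespace keyed by NGUID; -i as used in the trace, with
    # visibility then granted explicitly per host NQN
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g "$nguid" -i
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1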
00:15:26.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:26.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.576 --rc genhtml_branch_coverage=1 00:15:26.576 --rc genhtml_function_coverage=1 00:15:26.576 --rc genhtml_legend=1 00:15:26.576 --rc geninfo_all_blocks=1 00:15:26.576 --rc geninfo_unexecuted_blocks=1 00:15:26.576 00:15:26.576 ' 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:26.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.576 --rc genhtml_branch_coverage=1 00:15:26.576 --rc genhtml_function_coverage=1 00:15:26.576 --rc genhtml_legend=1 00:15:26.576 --rc geninfo_all_blocks=1 00:15:26.576 --rc geninfo_unexecuted_blocks=1 00:15:26.576 00:15:26.576 ' 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:26.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.576 --rc genhtml_branch_coverage=1 00:15:26.576 --rc genhtml_function_coverage=1 00:15:26.576 --rc genhtml_legend=1 00:15:26.576 --rc geninfo_all_blocks=1 00:15:26.576 --rc geninfo_unexecuted_blocks=1 00:15:26.576 00:15:26.576 ' 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:26.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.576 --rc genhtml_branch_coverage=1 00:15:26.576 --rc genhtml_function_coverage=1 00:15:26.576 --rc genhtml_legend=1 00:15:26.576 --rc geninfo_all_blocks=1 00:15:26.576 --rc geninfo_unexecuted_blocks=1 00:15:26.576 00:15:26.576 ' 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
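The scripts/common.sh trace above is the usual lcov version gate: both version strings are split on '.', '-' and ':' and the numeric fields are compared left to right, first difference wins. A standalone sketch of the same idiom with illustrative values (the real helper loops over every field; one field is enough to show the shape):

    ver=1.15 min=2
    IFS='.-:' read -ra ver1 <<< "$ver"
    IFS='.-:' read -ra ver2 <<< "$min"
    # 1 < 2 in the first field, so 1.15 sorts before 2 and 'lt 1.15 2' succeeds
    (( ${ver1[0]:-0} < ${ver2[0]:-0} )) && echo "$ver is older than $min"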
00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.576 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:26.837 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.837 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:15:26.837 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:26.837 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:26.837 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:26.837 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:26.837 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:26.837 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:26.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:26.837 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:26.837 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:26.837 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:26.837 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:26.837 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:26.837 15:25:15 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:26.837 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:26.837 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:26.837 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:26.837 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:26.837 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:26.837 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:26.837 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.837 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:26.837 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.837 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:26.837 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:26.837 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:15:26.837 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:34.976 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:34.976 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:15:34.976 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:34.976 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:34.976 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:34.976 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:34.976 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:34.976 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:15:34.976 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:34.976 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:15:34.976 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:15:34.976 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:15:34.976 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:15:34.976 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:34.977 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:34.977 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:34.977 
15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:34.977 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:34.977 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:34.977 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:34.977 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:34.977 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:15:34.977 00:15:34.977 --- 10.0.0.2 ping statistics --- 00:15:34.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.977 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:15:34.977 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:34.977 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:34.977 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:15:34.977 00:15:34.977 --- 10.0.0.1 ping statistics --- 00:15:34.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.977 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:15:34.977 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:34.977 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:15:34.977 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:34.977 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:34.977 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:34.977 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:34.977 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:34.977 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:34.977 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:34.977 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:34.977 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:34.977 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:34.977 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:34.977 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=551113 00:15:34.977 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 551113 00:15:34.978 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:34.978 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 551113 ']' 00:15:34.978 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.978 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:34.978 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.978 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:34.978 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:34.978 [2024-11-20 15:25:23.124834] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
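nvmf_tcp_init above turns the two e810 ports into a point-to-point rig: the target port is moved into its own network namespace so the initiator (10.0.0.1 on cvl_0_1) and the target (10.0.0.2 on cvl_0_0) talk over a real link, and the bidirectional pings prove it. The same plumbing, condensed from the trace; the cvl_* names are this rig's interface aliases, so substitute your own:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns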
00:15:34.978 [2024-11-20 15:25:23.124901] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:34.978 [2024-11-20 15:25:23.224086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:34.978 [2024-11-20 15:25:23.278862] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:34.978 [2024-11-20 15:25:23.278912] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:34.978 [2024-11-20 15:25:23.278922] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:34.978 [2024-11-20 15:25:23.278930] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:34.978 [2024-11-20 15:25:23.278937] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:34.978 [2024-11-20 15:25:23.281292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.978 [2024-11-20 15:25:23.281587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:34.978 [2024-11-20 15:25:23.281747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:34.978 [2024-11-20 15:25:23.281748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.239 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:35.239 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:15:35.239 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:35.239 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:35.239 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:35.239 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:35.239 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:35.239 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.239 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:35.239 [2024-11-20 15:25:24.006275] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:35.239 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.239 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:35.239 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.239 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:35.239 Malloc0 00:15:35.239 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.239 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:35.239 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
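With nvmf_tgt running inside the namespace, the test provisions the target entirely over JSON-RPC. Condensed from the rpc_cmd calls in the trace (those above plus the subsystem and listener calls that follow); rpc.py stands for scripts/rpc.py pointed at the target's RPC socket:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MB bdev, 512 B blocks
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420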
00:15:35.239 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:35.239 Malloc1 00:15:35.239 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.239 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:35.239 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.239 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:35.239 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.239 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:35.239 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.239 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:35.239 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.239 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:35.239 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.239 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:35.239 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.239 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:35.239 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.239 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:35.239 [2024-11-20 15:25:24.126876] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:35.239 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.239 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:35.239 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.239 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:35.239 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.240 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:15:35.500 00:15:35.500 Discovery Log Number of Records 2, Generation counter 2 00:15:35.500 =====Discovery Log Entry 0====== 00:15:35.500 trtype: tcp 00:15:35.500 adrfam: ipv4 00:15:35.500 subtype: current discovery subsystem 00:15:35.500 treq: not required 00:15:35.500 portid: 0 00:15:35.500 trsvcid: 4420 00:15:35.500 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:15:35.500 traddr: 10.0.0.2 00:15:35.500 eflags: explicit discovery connections, duplicate discovery information 00:15:35.500 sectype: none 00:15:35.500 =====Discovery Log Entry 1====== 00:15:35.500 trtype: tcp 00:15:35.500 adrfam: ipv4 00:15:35.500 subtype: nvme subsystem 00:15:35.500 treq: not required 00:15:35.500 portid: 0 00:15:35.500 trsvcid: 4420 00:15:35.500 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:35.500 traddr: 10.0.0.2 00:15:35.500 eflags: none 00:15:35.501 sectype: none 00:15:35.501 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:35.501 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:35.501 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:35.501 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:35.501 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:35.501 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:35.501 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:35.501 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:35.501 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:35.501 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:35.501 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:37.414 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:37.414 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:15:37.414 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:37.414 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:15:37.414 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:15:37.414 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:15:39.327 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:39.327 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:39.327 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:39.327 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:15:39.327 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:39.327 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:15:39.327 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:39.327 15:25:27 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:39.327 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:39.327 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:39.327 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:39.327 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:39.327 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:39.327 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:39.327 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:39.327 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:39.327 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:39.327 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:39.327 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:39.327 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:39.327 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:15:39.327 /dev/nvme0n2 ]] 00:15:39.327 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:39.327 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:39.327 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:39.327 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:39.327 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:39.327 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:39.327 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:39.328 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:39.328 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:39.328 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:39.328 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:39.328 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:39.328 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:39.328 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:39.328 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:39.328 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:39.328 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:39.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:39.328 15:25:28 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:39.328 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:15:39.328 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:39.328 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:39.328 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:39.328 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:39.328 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:15:39.328 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:39.328 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:39.328 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.328 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:39.328 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.328 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:39.328 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:39.328 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:39.328 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:15:39.328 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:39.328 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:15:39.328 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:39.328 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:39.328 rmmod nvme_tcp 00:15:39.328 rmmod nvme_fabrics 00:15:39.328 rmmod nvme_keyring 00:15:39.328 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:39.328 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:15:39.328 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:15:39.328 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 551113 ']' 00:15:39.328 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 551113 00:15:39.328 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 551113 ']' 00:15:39.328 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 551113 00:15:39.328 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:15:39.328 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:39.328 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 551113 
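On the host side the test drives stock nvme-cli against that listener: discover, connect, count the namespaces by serial, then disconnect. Condensed from the trace; NVME_HOST expands to the --hostnqn/--hostid pair generated by 'nvme gen-hostnqn' when common.sh was sourced earlier in this run:

    nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420
    nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 2 (nvme0n1, nvme0n2)
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1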
00:15:39.589 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:39.589 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:39.589 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 551113' 00:15:39.589 killing process with pid 551113 00:15:39.589 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 551113 00:15:39.589 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 551113 00:15:39.589 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:39.589 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:39.589 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:39.589 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:15:39.589 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:15:39.589 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:39.589 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:15:39.589 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:39.589 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:39.589 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.589 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:39.589 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.151 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:42.151 00:15:42.151 real 0m15.205s 00:15:42.151 user 0m22.775s 00:15:42.151 sys 0m6.401s 00:15:42.151 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:42.151 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:42.151 ************************************ 00:15:42.151 END TEST nvmf_nvme_cli 00:15:42.152 ************************************ 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:42.152 ************************************ 00:15:42.152 START TEST nvmf_vfio_user 00:15:42.152 ************************************ 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:15:42.152 * Looking for test storage... 00:15:42.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:42.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.152 --rc genhtml_branch_coverage=1 00:15:42.152 --rc genhtml_function_coverage=1 00:15:42.152 --rc genhtml_legend=1 00:15:42.152 --rc geninfo_all_blocks=1 00:15:42.152 --rc geninfo_unexecuted_blocks=1 00:15:42.152 00:15:42.152 ' 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:42.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.152 --rc genhtml_branch_coverage=1 00:15:42.152 --rc genhtml_function_coverage=1 00:15:42.152 --rc genhtml_legend=1 00:15:42.152 --rc geninfo_all_blocks=1 00:15:42.152 --rc geninfo_unexecuted_blocks=1 00:15:42.152 00:15:42.152 ' 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:42.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.152 --rc genhtml_branch_coverage=1 00:15:42.152 --rc genhtml_function_coverage=1 00:15:42.152 --rc genhtml_legend=1 00:15:42.152 --rc geninfo_all_blocks=1 00:15:42.152 --rc geninfo_unexecuted_blocks=1 00:15:42.152 00:15:42.152 ' 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:42.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.152 --rc genhtml_branch_coverage=1 00:15:42.152 --rc genhtml_function_coverage=1 00:15:42.152 --rc genhtml_legend=1 00:15:42.152 --rc geninfo_all_blocks=1 00:15:42.152 --rc geninfo_unexecuted_blocks=1 00:15:42.152 00:15:42.152 ' 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
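The lt/cmp_versions trace above splits both version strings on '.', '-' and ':' and compares numeric fields left to right; that is how `lt 1.15 2` decides the installed lcov predates 2.x and picks the matching LCOV_OPTS. A simplified numeric-only sketch (the real helper also takes an explicit operator argument and normalizes fields, e.g. to cope with leading zeros):

cmp_lt() {
    local IFS='.-:'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v a b
    for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not "less than"
}

cmp_lt 1.15 2 && echo "lcov 1.15 predates 2.x"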
nvmf/common.sh@7 -- # uname -s 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.152 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.153 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.153 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:42.153 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.153 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:15:42.153 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:42.153 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:42.153 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:42.153 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:42.153 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:42.153 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:42.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:42.153 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:42.153 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:42.153 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:42.153 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:42.153 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
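The "[: : integer expression expected" complaint above is the classic empty-variable numeric test: line 33 of nvmf/common.sh expands an unset flag into `[ '' -eq 1 ]`, and test cannot compare an empty string as an integer. A defensive form supplies a default before comparing (SOME_FLAG is a placeholder name, not the variable common.sh actually tests):

# fails with "integer expression expected" when SOME_FLAG is unset or empty:
#   [ "$SOME_FLAG" -eq 1 ] && echo "flag enabled"
# guarded form: default to 0 so test always sees an integer
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi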
00:15:42.153 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:42.153 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:42.153 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:42.153 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:42.153 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:42.153 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:42.153 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:42.153 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:42.153 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=552822 00:15:42.153 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 552822' 00:15:42.153 Process pid: 552822 00:15:42.153 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:42.153 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 552822 00:15:42.153 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:42.153 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 552822 ']' 00:15:42.153 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.153 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:42.153 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.153 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:42.153 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:42.153 [2024-11-20 15:25:30.888956] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:15:42.153 [2024-11-20 15:25:30.889012] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:42.153 [2024-11-20 15:25:30.975024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:42.153 [2024-11-20 15:25:31.006529] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:42.153 [2024-11-20 15:25:31.006557] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
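The launch-and-wait pattern traced above: start nvmf_tgt in the background, remember its pid for the cleanup trap, then poll the RPC socket until the app answers. A sketch using the log's paths; the rpc_get_methods probe and the retry budget stand in for the suite's waitforlisten helper and are assumptions, not its exact mechanics:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' &
nvmfpid=$!
trap 'kill $nvmfpid; exit 1' SIGINT SIGTERM EXIT   # the suite uses killprocess here
# wait until the target's RPC server answers on the default socket
for (( i = 0; i < 100; i++ )); do
    "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.1
done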
00:15:42.153 [2024-11-20 15:25:31.006564] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:42.153 [2024-11-20 15:25:31.006568] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:42.153 [2024-11-20 15:25:31.006572] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:42.153 [2024-11-20 15:25:31.008026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:42.153 [2024-11-20 15:25:31.008196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:42.153 [2024-11-20 15:25:31.008284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:42.153 [2024-11-20 15:25:31.008466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.095 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:43.095 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:43.095 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:44.039 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:44.039 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:44.039 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:44.039 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:44.039 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:44.039 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:44.299 Malloc1 00:15:44.299 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:44.560 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:44.560 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:44.822 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:44.822 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:44.822 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:45.084 Malloc2 00:15:45.084 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
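The per-device setup traced above follows one fixed recipe per index: a vfio-user socket directory, a 64 MiB malloc bdev with 512-byte blocks, a subsystem, a namespace, and a VFIOUSER listener bound to that directory. Replayed as a loop, with commands and names exactly as in the log:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t VFIOUSER
for i in 1 2; do
    mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
    $rpc bdev_malloc_create 64 512 -b "Malloc$i"
    $rpc nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
    $rpc nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
    $rpc nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" -t VFIOUSER \
        -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
done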
00:15:45.084 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:45.345 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:45.608 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:45.608 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:45.608 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:45.608 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:45.608 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:45.608 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:45.608 [2024-11-20 15:25:34.405544] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:15:45.608 [2024-11-20 15:25:34.405583] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid553516 ] 00:15:45.608 [2024-11-20 15:25:34.445451] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:45.608 [2024-11-20 15:25:34.450706] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:45.608 [2024-11-20 15:25:34.450724] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fcab5c3d000 00:15:45.608 [2024-11-20 15:25:34.451702] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:45.608 [2024-11-20 15:25:34.452705] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:45.608 [2024-11-20 15:25:34.453702] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:45.608 [2024-11-20 15:25:34.454705] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:45.608 [2024-11-20 15:25:34.455714] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:45.608 [2024-11-20 15:25:34.456723] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:45.608 [2024-11-20 15:25:34.457736] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:15:45.608 [2024-11-20 15:25:34.458742] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:45.608 [2024-11-20 15:25:34.459749] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:45.608 [2024-11-20 15:25:34.459756] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fcab5c32000 00:15:45.608 [2024-11-20 15:25:34.460672] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:45.608 [2024-11-20 15:25:34.470114] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:45.608 [2024-11-20 15:25:34.470138] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:15:45.608 [2024-11-20 15:25:34.474850] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:45.608 [2024-11-20 15:25:34.474884] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:45.608 [2024-11-20 15:25:34.474945] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:15:45.608 [2024-11-20 15:25:34.474960] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:15:45.608 [2024-11-20 15:25:34.474964] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:15:45.608 [2024-11-20 15:25:34.475852] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:45.608 [2024-11-20 15:25:34.475860] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:15:45.608 [2024-11-20 15:25:34.475865] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:15:45.608 [2024-11-20 15:25:34.476862] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:45.608 [2024-11-20 15:25:34.476868] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:15:45.608 [2024-11-20 15:25:34.476874] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:45.609 [2024-11-20 15:25:34.477867] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:45.609 [2024-11-20 15:25:34.477873] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:45.609 [2024-11-20 15:25:34.478874] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:15:45.609 [2024-11-20 15:25:34.478880] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:45.609 [2024-11-20 15:25:34.478884] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:45.609 [2024-11-20 15:25:34.478889] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:45.609 [2024-11-20 15:25:34.478995] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:15:45.609 [2024-11-20 15:25:34.478999] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:45.609 [2024-11-20 15:25:34.479003] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:45.609 [2024-11-20 15:25:34.479884] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:45.609 [2024-11-20 15:25:34.480884] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:45.609 [2024-11-20 15:25:34.481885] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:45.609 [2024-11-20 15:25:34.482886] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:45.609 [2024-11-20 15:25:34.482944] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:45.609 [2024-11-20 15:25:34.483894] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:45.609 [2024-11-20 15:25:34.483900] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:45.609 [2024-11-20 15:25:34.483904] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:45.609 [2024-11-20 15:25:34.483919] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:15:45.609 [2024-11-20 15:25:34.483929] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:45.609 [2024-11-20 15:25:34.483940] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:45.609 [2024-11-20 15:25:34.483944] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:45.609 [2024-11-20 15:25:34.483947] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:45.609 [2024-11-20 15:25:34.483958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:15:45.609 [2024-11-20 15:25:34.483989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:45.609 [2024-11-20 15:25:34.483996] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:15:45.609 [2024-11-20 15:25:34.484000] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:15:45.609 [2024-11-20 15:25:34.484003] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:15:45.609 [2024-11-20 15:25:34.484006] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:45.609 [2024-11-20 15:25:34.484011] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:15:45.609 [2024-11-20 15:25:34.484015] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:15:45.609 [2024-11-20 15:25:34.484018] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:15:45.609 [2024-11-20 15:25:34.484025] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:45.609 [2024-11-20 15:25:34.484033] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:45.609 [2024-11-20 15:25:34.484043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:45.609 [2024-11-20 15:25:34.484052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.609 [2024-11-20 15:25:34.484059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.609 [2024-11-20 15:25:34.484066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.609 [2024-11-20 15:25:34.484073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.609 [2024-11-20 15:25:34.484076] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:45.609 [2024-11-20 15:25:34.484081] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:45.609 [2024-11-20 15:25:34.484087] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:45.609 [2024-11-20 15:25:34.484097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:45.609 [2024-11-20 15:25:34.484102] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:15:45.609 
[2024-11-20 15:25:34.484106] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:45.609 [2024-11-20 15:25:34.484111] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:15:45.609 [2024-11-20 15:25:34.484116] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:45.609 [2024-11-20 15:25:34.484122] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:45.609 [2024-11-20 15:25:34.484129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:45.609 [2024-11-20 15:25:34.484176] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:15:45.609 [2024-11-20 15:25:34.484182] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:45.609 [2024-11-20 15:25:34.484188] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:45.609 [2024-11-20 15:25:34.484191] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:45.609 [2024-11-20 15:25:34.484193] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:45.609 [2024-11-20 15:25:34.484198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:45.609 [2024-11-20 15:25:34.484207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:45.609 [2024-11-20 15:25:34.484216] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:15:45.609 [2024-11-20 15:25:34.484223] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:15:45.609 [2024-11-20 15:25:34.484230] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:45.609 [2024-11-20 15:25:34.484235] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:45.609 [2024-11-20 15:25:34.484238] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:45.609 [2024-11-20 15:25:34.484240] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:45.609 [2024-11-20 15:25:34.484245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:45.609 [2024-11-20 15:25:34.484262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:45.609 [2024-11-20 15:25:34.484272] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:15:45.609 [2024-11-20 15:25:34.484278] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:45.609 [2024-11-20 15:25:34.484283] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:45.609 [2024-11-20 15:25:34.484286] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:45.609 [2024-11-20 15:25:34.484288] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:45.609 [2024-11-20 15:25:34.484293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:45.609 [2024-11-20 15:25:34.484304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:45.609 [2024-11-20 15:25:34.484310] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:45.609 [2024-11-20 15:25:34.484314] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:45.609 [2024-11-20 15:25:34.484320] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:15:45.610 [2024-11-20 15:25:34.484325] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:45.610 [2024-11-20 15:25:34.484328] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:45.610 [2024-11-20 15:25:34.484332] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:15:45.610 [2024-11-20 15:25:34.484336] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:45.610 [2024-11-20 15:25:34.484339] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:15:45.610 [2024-11-20 15:25:34.484343] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:15:45.610 [2024-11-20 15:25:34.484358] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:45.610 [2024-11-20 15:25:34.484365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:45.610 [2024-11-20 15:25:34.484374] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:45.610 [2024-11-20 15:25:34.484382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:45.610 [2024-11-20 15:25:34.484390] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:45.610 [2024-11-20 15:25:34.484398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:45.610 [2024-11-20 15:25:34.484406] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:45.610 [2024-11-20 15:25:34.484416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:45.610 [2024-11-20 15:25:34.484426] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:45.610 [2024-11-20 15:25:34.484429] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:45.610 [2024-11-20 15:25:34.484431] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:45.610 [2024-11-20 15:25:34.484434] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:45.610 [2024-11-20 15:25:34.484436] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:45.610 [2024-11-20 15:25:34.484441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:45.610 [2024-11-20 15:25:34.484447] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:45.610 [2024-11-20 15:25:34.484449] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:45.610 [2024-11-20 15:25:34.484452] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:45.610 [2024-11-20 15:25:34.484456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:45.610 [2024-11-20 15:25:34.484462] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:45.610 [2024-11-20 15:25:34.484465] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:45.610 [2024-11-20 15:25:34.484467] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:45.610 [2024-11-20 15:25:34.484471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:45.610 [2024-11-20 15:25:34.484477] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:45.610 [2024-11-20 15:25:34.484480] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:45.610 [2024-11-20 15:25:34.484483] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:45.610 [2024-11-20 15:25:34.484487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:45.610 [2024-11-20 15:25:34.484492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:45.610 [2024-11-20 15:25:34.484500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:15:45.610 [2024-11-20 15:25:34.484508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:45.610 [2024-11-20 15:25:34.484513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:45.610 ===================================================== 00:15:45.610 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:45.610 ===================================================== 00:15:45.610 Controller Capabilities/Features 00:15:45.610 ================================ 00:15:45.610 Vendor ID: 4e58 00:15:45.610 Subsystem Vendor ID: 4e58 00:15:45.610 Serial Number: SPDK1 00:15:45.610 Model Number: SPDK bdev Controller 00:15:45.610 Firmware Version: 25.01 00:15:45.610 Recommended Arb Burst: 6 00:15:45.610 IEEE OUI Identifier: 8d 6b 50 00:15:45.610 Multi-path I/O 00:15:45.610 May have multiple subsystem ports: Yes 00:15:45.610 May have multiple controllers: Yes 00:15:45.610 Associated with SR-IOV VF: No 00:15:45.610 Max Data Transfer Size: 131072 00:15:45.610 Max Number of Namespaces: 32 00:15:45.610 Max Number of I/O Queues: 127 00:15:45.610 NVMe Specification Version (VS): 1.3 00:15:45.610 NVMe Specification Version (Identify): 1.3 00:15:45.610 Maximum Queue Entries: 256 00:15:45.610 Contiguous Queues Required: Yes 00:15:45.610 Arbitration Mechanisms Supported 00:15:45.610 Weighted Round Robin: Not Supported 00:15:45.610 Vendor Specific: Not Supported 00:15:45.610 Reset Timeout: 15000 ms 00:15:45.610 Doorbell Stride: 4 bytes 00:15:45.610 NVM Subsystem Reset: Not Supported 00:15:45.610 Command Sets Supported 00:15:45.610 NVM Command Set: Supported 00:15:45.610 Boot Partition: Not Supported 00:15:45.610 Memory Page Size Minimum: 4096 bytes 00:15:45.610 Memory Page Size Maximum: 4096 bytes 00:15:45.610 Persistent Memory Region: Not Supported 00:15:45.610 Optional Asynchronous Events Supported 00:15:45.610 Namespace Attribute Notices: Supported 00:15:45.610 Firmware Activation Notices: Not Supported 00:15:45.610 ANA Change Notices: Not Supported 00:15:45.610 PLE Aggregate Log Change Notices: Not Supported 00:15:45.610 LBA Status Info Alert Notices: Not Supported 00:15:45.610 EGE Aggregate Log Change Notices: Not Supported 00:15:45.610 Normal NVM Subsystem Shutdown event: Not Supported 00:15:45.610 Zone Descriptor Change Notices: Not Supported 00:15:45.610 Discovery Log Change Notices: Not Supported 00:15:45.610 Controller Attributes 00:15:45.610 128-bit Host Identifier: Supported 00:15:45.610 Non-Operational Permissive Mode: Not Supported 00:15:45.610 NVM Sets: Not Supported 00:15:45.610 Read Recovery Levels: Not Supported 00:15:45.610 Endurance Groups: Not Supported 00:15:45.610 Predictable Latency Mode: Not Supported 00:15:45.610 Traffic Based Keep ALive: Not Supported 00:15:45.610 Namespace Granularity: Not Supported 00:15:45.610 SQ Associations: Not Supported 00:15:45.610 UUID List: Not Supported 00:15:45.610 Multi-Domain Subsystem: Not Supported 00:15:45.610 Fixed Capacity Management: Not Supported 00:15:45.610 Variable Capacity Management: Not Supported 00:15:45.610 Delete Endurance Group: Not Supported 00:15:45.610 Delete NVM Set: Not Supported 00:15:45.610 Extended LBA Formats Supported: Not Supported 00:15:45.610 Flexible Data Placement Supported: Not Supported 00:15:45.610 00:15:45.610 Controller Memory Buffer Support 00:15:45.610 ================================ 00:15:45.610 
Supported: No 00:15:45.610 00:15:45.610 Persistent Memory Region Support 00:15:45.610 ================================ 00:15:45.610 Supported: No 00:15:45.610 00:15:45.610 Admin Command Set Attributes 00:15:45.610 ============================ 00:15:45.610 Security Send/Receive: Not Supported 00:15:45.610 Format NVM: Not Supported 00:15:45.610 Firmware Activate/Download: Not Supported 00:15:45.611 Namespace Management: Not Supported 00:15:45.611 Device Self-Test: Not Supported 00:15:45.611 Directives: Not Supported 00:15:45.611 NVMe-MI: Not Supported 00:15:45.611 Virtualization Management: Not Supported 00:15:45.611 Doorbell Buffer Config: Not Supported 00:15:45.611 Get LBA Status Capability: Not Supported 00:15:45.611 Command & Feature Lockdown Capability: Not Supported 00:15:45.611 Abort Command Limit: 4 00:15:45.611 Async Event Request Limit: 4 00:15:45.611 Number of Firmware Slots: N/A 00:15:45.611 Firmware Slot 1 Read-Only: N/A 00:15:45.611 Firmware Activation Without Reset: N/A 00:15:45.611 Multiple Update Detection Support: N/A 00:15:45.611 Firmware Update Granularity: No Information Provided 00:15:45.611 Per-Namespace SMART Log: No 00:15:45.611 Asymmetric Namespace Access Log Page: Not Supported 00:15:45.611 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:45.611 Command Effects Log Page: Supported 00:15:45.611 Get Log Page Extended Data: Supported 00:15:45.611 Telemetry Log Pages: Not Supported 00:15:45.611 Persistent Event Log Pages: Not Supported 00:15:45.611 Supported Log Pages Log Page: May Support 00:15:45.611 Commands Supported & Effects Log Page: Not Supported 00:15:45.611 Feature Identifiers & Effects Log Page:May Support 00:15:45.611 NVMe-MI Commands & Effects Log Page: May Support 00:15:45.611 Data Area 4 for Telemetry Log: Not Supported 00:15:45.611 Error Log Page Entries Supported: 128 00:15:45.611 Keep Alive: Supported 00:15:45.611 Keep Alive Granularity: 10000 ms 00:15:45.611 00:15:45.611 NVM Command Set Attributes 00:15:45.611 ========================== 00:15:45.611 Submission Queue Entry Size 00:15:45.611 Max: 64 00:15:45.611 Min: 64 00:15:45.611 Completion Queue Entry Size 00:15:45.611 Max: 16 00:15:45.611 Min: 16 00:15:45.611 Number of Namespaces: 32 00:15:45.611 Compare Command: Supported 00:15:45.611 Write Uncorrectable Command: Not Supported 00:15:45.611 Dataset Management Command: Supported 00:15:45.611 Write Zeroes Command: Supported 00:15:45.611 Set Features Save Field: Not Supported 00:15:45.611 Reservations: Not Supported 00:15:45.611 Timestamp: Not Supported 00:15:45.611 Copy: Supported 00:15:45.611 Volatile Write Cache: Present 00:15:45.611 Atomic Write Unit (Normal): 1 00:15:45.611 Atomic Write Unit (PFail): 1 00:15:45.611 Atomic Compare & Write Unit: 1 00:15:45.611 Fused Compare & Write: Supported 00:15:45.611 Scatter-Gather List 00:15:45.611 SGL Command Set: Supported (Dword aligned) 00:15:45.611 SGL Keyed: Not Supported 00:15:45.611 SGL Bit Bucket Descriptor: Not Supported 00:15:45.611 SGL Metadata Pointer: Not Supported 00:15:45.611 Oversized SGL: Not Supported 00:15:45.611 SGL Metadata Address: Not Supported 00:15:45.611 SGL Offset: Not Supported 00:15:45.611 Transport SGL Data Block: Not Supported 00:15:45.611 Replay Protected Memory Block: Not Supported 00:15:45.611 00:15:45.611 Firmware Slot Information 00:15:45.611 ========================= 00:15:45.611 Active slot: 1 00:15:45.611 Slot 1 Firmware Revision: 25.01 00:15:45.611 00:15:45.611 00:15:45.611 Commands Supported and Effects 00:15:45.611 ============================== 00:15:45.611 Admin 
Commands
00:15:45.611 --------------
00:15:45.611 Get Log Page (02h): Supported
00:15:45.611 Identify (06h): Supported
00:15:45.611 Abort (08h): Supported
00:15:45.611 Set Features (09h): Supported
00:15:45.611 Get Features (0Ah): Supported
00:15:45.611 Asynchronous Event Request (0Ch): Supported
00:15:45.611 Keep Alive (18h): Supported
00:15:45.611 I/O Commands
00:15:45.611 ------------
00:15:45.611 Flush (00h): Supported LBA-Change
00:15:45.611 Write (01h): Supported LBA-Change
00:15:45.611 Read (02h): Supported
00:15:45.611 Compare (05h): Supported
00:15:45.611 Write Zeroes (08h): Supported LBA-Change
00:15:45.611 Dataset Management (09h): Supported LBA-Change
00:15:45.611 Copy (19h): Supported LBA-Change
00:15:45.611
00:15:45.611 Error Log
00:15:45.611 =========
00:15:45.611
00:15:45.611 Arbitration
00:15:45.611 ===========
00:15:45.611 Arbitration Burst: 1
00:15:45.611
00:15:45.611 Power Management
00:15:45.611 ================
00:15:45.611 Number of Power States: 1
00:15:45.611 Current Power State: Power State #0
00:15:45.611 Power State #0:
00:15:45.611 Max Power: 0.00 W
00:15:45.611 Non-Operational State: Operational
00:15:45.611 Entry Latency: Not Reported
00:15:45.611 Exit Latency: Not Reported
00:15:45.611 Relative Read Throughput: 0
00:15:45.611 Relative Read Latency: 0
00:15:45.611 Relative Write Throughput: 0
00:15:45.611 Relative Write Latency: 0
00:15:45.611 Idle Power: Not Reported
00:15:45.611 Active Power: Not Reported
00:15:45.611 Non-Operational Permissive Mode: Not Supported
00:15:45.611
00:15:45.611 Health Information
00:15:45.611 ==================
00:15:45.611 Critical Warnings:
00:15:45.611 Available Spare Space: OK
00:15:45.611 Temperature: OK
00:15:45.611 Device Reliability: OK
00:15:45.611 Read Only: No
00:15:45.611 Volatile Memory Backup: OK
00:15:45.611 Current Temperature: 0 Kelvin (-273 Celsius)
00:15:45.611 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:15:45.611 Available Spare: 0%
00:15:45.611 Available Spare Threshold: 0%
00:15:45.611 Life Percentage Used: 0%
00:15:45.612 Data Units Read: 0
00:15:45.612 Data Units Written: 0
00:15:45.612 Host Read Commands: 0
00:15:45.612 Host Write Commands: 0
00:15:45.612 Controller Busy Time: 0 minutes
00:15:45.612 Power Cycles: 0
00:15:45.612 Power On Hours: 0 hours
00:15:45.612 Unsafe Shutdowns: 0
00:15:45.612 Unrecoverable Media Errors: 0
00:15:45.612 Lifetime Error Log Entries: 0
00:15:45.612 Warning Temperature Time: 0 minutes
00:15:45.612 Critical Temperature Time: 0 minutes
00:15:45.612
00:15:45.612 Number of Queues
00:15:45.612 ================
00:15:45.612 Number of I/O Submission Queues: 127
00:15:45.612 Number of I/O Completion Queues: 127
00:15:45.612
00:15:45.612 Active Namespaces
00:15:45.612 =================
00:15:45.612 Namespace ID:1
00:15:45.612 Error Recovery Timeout: Unlimited
00:15:45.612 Command Set Identifier: NVM (00h)
00:15:45.612 Deallocate: Supported
00:15:45.612 Deallocated/Unwritten Error: Not Supported
00:15:45.612 Deallocated Read Value: Unknown
00:15:45.612 Deallocate in Write Zeroes: Not Supported
00:15:45.612 Deallocated Guard Field: 0xFFFF
00:15:45.612 Flush: Supported
00:15:45.612 Reservation: Supported
00:15:45.612 Namespace Sharing Capabilities: Multiple Controllers
00:15:45.612 Size (in LBAs): 131072 (0GiB)
00:15:45.612 Capacity (in LBAs): 131072 (0GiB)
00:15:45.612 Utilization (in LBAs): 131072 (0GiB)
00:15:45.612 NGUID: 819E3D8A57AA4631830839E4ECBB19AD
00:15:45.612 UUID: 819e3d8a-57aa-4631-8308-39e4ecbb19ad
00:15:45.612 Thin Provisioning: Not Supported
00:15:45.612 Per-NS Atomic Units: Yes
00:15:45.612 Atomic Boundary Size (Normal): 0
00:15:45.612 Atomic Boundary Size (PFail): 0
00:15:45.612 Atomic Boundary Offset: 0
00:15:45.612 Maximum Single Source Range Length: 65535
00:15:45.612 Maximum Copy Length: 65535
00:15:45.612 Maximum Source Range Count: 1
00:15:45.612 NGUID/EUI64 Never Reused: No
00:15:45.612 Namespace Write Protected: No
00:15:45.612 Number of LBA Formats: 1
00:15:45.612 Current LBA Format: LBA Format #00
00:15:45.612 LBA Format #00: Data Size: 512 Metadata Size: 0
00:15:45.612
00:15:45.611 [2024-11-20 15:25:34.484587] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
00:15:45.611 [2024-11-20 15:25:34.484600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
00:15:45.611 [2024-11-20 15:25:34.484620] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD
00:15:45.611 [2024-11-20 15:25:34.484627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:45.611 [2024-11-20 15:25:34.484632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:45.611 [2024-11-20 15:25:34.484636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:45.611 [2024-11-20 15:25:34.484642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:45.611 [2024-11-20 15:25:34.484901] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001
00:15:45.611 [2024-11-20 15:25:34.484908] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001
00:15:45.611 [2024-11-20 15:25:34.485902] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:15:45.611 [2024-11-20 15:25:34.485943] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us
00:15:45.611 [2024-11-20 15:25:34.485948] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms
00:15:45.611 [2024-11-20 15:25:34.486916] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9
00:15:45.611 [2024-11-20 15:25:34.486924] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds
00:15:45.611 [2024-11-20 15:25:34.486979] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl
00:15:45.611 [2024-11-20 15:25:34.489164] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:15:45.612 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
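For readers skimming these runs, the invocation above decomposes as follows. This is a reproduction sketch, not output captured by this job; the flag glosses follow common spdk_nvme_perf usage, and the reading of -g is inferred from the --single-file-segments token echoed in the DPDK EAL parameter line of the identify run later in this log, so treat them as editorial annotations.

#!/usr/bin/env bash
# Annotated sketch of the read run above (binary path as in this workspace).
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf

# -r  transport ID: the vfio-user socket directory plus the target subsystem NQN
# -s  DPDK hugepage memory to reserve, in MB
# -g  single-file hugepage segments (inferred from --single-file-segments in the
#     EAL parameters echoed by the identify run later in this log)
# -q  queue depth                      -o  I/O size in bytes
# -w  I/O pattern (read/write/randrw)  -t  run time in seconds
# -c  core mask; 0x2 pins the worker to lcore 1, matching "NSID 1 with lcore 1"
"$PERF" -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
        -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

The MiB/s column in the result tables is then just IOPS times I/O size: 39,967.66 x 4096 B / 2^20 ~ 156.12 MiB/s for the read run that follows.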
00:15:45.873 [2024-11-20 15:25:34.676846] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:51.162 Initializing NVMe Controllers 00:15:51.162 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:51.162 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:51.162 Initialization complete. Launching workers. 00:15:51.162 ======================================================== 00:15:51.162 Latency(us) 00:15:51.162 Device Information : IOPS MiB/s Average min max 00:15:51.162 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39967.66 156.12 3202.26 845.94 7775.47 00:15:51.162 ======================================================== 00:15:51.162 Total : 39967.66 156.12 3202.26 845.94 7775.47 00:15:51.162 00:15:51.162 [2024-11-20 15:25:39.693899] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:51.162 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:51.162 [2024-11-20 15:25:39.887777] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:56.449 Initializing NVMe Controllers 00:15:56.449 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:56.449 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:56.449 Initialization complete. Launching workers. 
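A quick consistency check for these latency tables: at a fixed queue depth, IOPS is roughly queue depth divided by average latency. The read run above gives 128 / 3,202.26 us ~ 40.0k IOPS against 39,967.66 reported, and the write run below gives 128 / 7,980.69 us ~ 16.0k IOPS against 16,051.20 reported.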
00:15:56.449 ======================================================== 00:15:56.449 Latency(us) 00:15:56.449 Device Information : IOPS MiB/s Average min max 00:15:56.449 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7980.69 6982.36 8980.26 00:15:56.449 ======================================================== 00:15:56.449 Total : 16051.20 62.70 7980.69 6982.36 8980.26 00:15:56.449 00:15:56.449 [2024-11-20 15:25:44.920751] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:56.449 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:56.449 [2024-11-20 15:25:45.130584] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:01.817 [2024-11-20 15:25:50.214413] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:01.817 Initializing NVMe Controllers 00:16:01.817 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:01.817 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:01.817 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:16:01.817 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:16:01.817 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:16:01.817 Initialization complete. Launching workers. 00:16:01.817 Starting thread on core 2 00:16:01.817 Starting thread on core 3 00:16:01.817 Starting thread on core 1 00:16:01.817 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:16:01.817 [2024-11-20 15:25:50.467228] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:05.118 [2024-11-20 15:25:53.535106] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:05.118 Initializing NVMe Controllers 00:16:05.118 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:05.118 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:05.118 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:05.118 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:05.118 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:05.118 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:05.118 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:05.118 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:05.118 Initialization complete. Launching workers. 
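In the per-core arbitration table that follows, the two figures per line are redundant views of the same rate: "secs/100000 ios" is the reciprocal of the IO/s column, e.g. 100000 / 8290.67 IO/s ~ 12.06 s for core 0. The echoed configuration line above records the run's effective settings (-q 64, -w randrw -M 50, core mask 0xf).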
00:16:05.118 Starting thread on core 1 with urgent priority queue 00:16:05.118 Starting thread on core 2 with urgent priority queue 00:16:05.118 Starting thread on core 3 with urgent priority queue 00:16:05.118 Starting thread on core 0 with urgent priority queue 00:16:05.118 SPDK bdev Controller (SPDK1 ) core 0: 8290.67 IO/s 12.06 secs/100000 ios 00:16:05.118 SPDK bdev Controller (SPDK1 ) core 1: 9513.33 IO/s 10.51 secs/100000 ios 00:16:05.118 SPDK bdev Controller (SPDK1 ) core 2: 10366.00 IO/s 9.65 secs/100000 ios 00:16:05.118 SPDK bdev Controller (SPDK1 ) core 3: 9429.00 IO/s 10.61 secs/100000 ios 00:16:05.118 ======================================================== 00:16:05.118 00:16:05.118 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:05.118 [2024-11-20 15:25:53.785600] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:05.118 Initializing NVMe Controllers 00:16:05.118 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:05.118 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:05.118 Namespace ID: 1 size: 0GB 00:16:05.118 Initialization complete. 00:16:05.118 INFO: using host memory buffer for IO 00:16:05.118 Hello world! 00:16:05.118 [2024-11-20 15:25:53.819820] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:05.118 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:05.118 [2024-11-20 15:25:54.053438] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:06.504 Initializing NVMe Controllers 00:16:06.504 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:06.504 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:06.504 Initialization complete. Launching workers. 
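Note the units in the overhead statistics that follow: the summary line reports submit/complete times in nanoseconds, while the histograms bucket by microseconds. The two agree, e.g. the submit maximum of 3,998,334.2 ns is 3,998.3 us, which is why the cumulative submit histogram only reaches 100% in its 3986.773-4014.080 us bucket (the 12 slowest submissions).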
00:16:06.504 submit (in ns) avg, min, max = 5341.3, 2818.3, 3998334.2 00:16:06.504 complete (in ns) avg, min, max = 17472.1, 1636.7, 3997343.3 00:16:06.504 00:16:06.504 Submit histogram 00:16:06.504 ================ 00:16:06.504 Range in us Cumulative Count 00:16:06.504 2.813 - 2.827: 0.1428% ( 29) 00:16:06.504 2.827 - 2.840: 0.8323% ( 140) 00:16:06.505 2.840 - 2.853: 2.2754% ( 293) 00:16:06.505 2.853 - 2.867: 4.9498% ( 543) 00:16:06.505 2.867 - 2.880: 10.3970% ( 1106) 00:16:06.505 2.880 - 2.893: 16.9129% ( 1323) 00:16:06.505 2.893 - 2.907: 22.4045% ( 1115) 00:16:06.505 2.907 - 2.920: 28.4180% ( 1221) 00:16:06.505 2.920 - 2.933: 33.4368% ( 1019) 00:16:06.505 2.933 - 2.947: 38.8495% ( 1099) 00:16:06.505 2.947 - 2.960: 44.7104% ( 1190) 00:16:06.505 2.960 - 2.973: 50.4039% ( 1156) 00:16:06.505 2.973 - 2.987: 57.5158% ( 1444) 00:16:06.505 2.987 - 3.000: 65.3960% ( 1600) 00:16:06.505 3.000 - 3.013: 73.8623% ( 1719) 00:16:06.505 3.013 - 3.027: 81.1170% ( 1473) 00:16:06.505 3.027 - 3.040: 87.2981% ( 1255) 00:16:06.505 3.040 - 3.053: 92.6221% ( 1081) 00:16:06.505 3.053 - 3.067: 95.9762% ( 681) 00:16:06.505 3.067 - 3.080: 97.7738% ( 365) 00:16:06.505 3.080 - 3.093: 98.7392% ( 196) 00:16:06.505 3.093 - 3.107: 99.0790% ( 69) 00:16:06.505 3.107 - 3.120: 99.3056% ( 46) 00:16:06.505 3.120 - 3.133: 99.4385% ( 27) 00:16:06.505 3.133 - 3.147: 99.4976% ( 12) 00:16:06.505 3.147 - 3.160: 99.5370% ( 8) 00:16:06.505 3.160 - 3.173: 99.5666% ( 6) 00:16:06.505 3.173 - 3.187: 99.5715% ( 1) 00:16:06.505 3.320 - 3.333: 99.5764% ( 1) 00:16:06.505 3.547 - 3.573: 99.5814% ( 1) 00:16:06.505 3.840 - 3.867: 99.5863% ( 1) 00:16:06.505 3.893 - 3.920: 99.5912% ( 1) 00:16:06.505 3.920 - 3.947: 99.5961% ( 1) 00:16:06.505 4.000 - 4.027: 99.6011% ( 1) 00:16:06.505 4.347 - 4.373: 99.6060% ( 1) 00:16:06.505 4.507 - 4.533: 99.6109% ( 1) 00:16:06.505 4.587 - 4.613: 99.6208% ( 2) 00:16:06.505 4.640 - 4.667: 99.6306% ( 2) 00:16:06.505 4.693 - 4.720: 99.6355% ( 1) 00:16:06.505 4.720 - 4.747: 99.6454% ( 2) 00:16:06.505 4.747 - 4.773: 99.6552% ( 2) 00:16:06.505 4.773 - 4.800: 99.6602% ( 1) 00:16:06.505 4.827 - 4.853: 99.6651% ( 1) 00:16:06.505 4.853 - 4.880: 99.6700% ( 1) 00:16:06.505 4.880 - 4.907: 99.6799% ( 2) 00:16:06.505 4.907 - 4.933: 99.6848% ( 1) 00:16:06.505 4.960 - 4.987: 99.6946% ( 2) 00:16:06.505 4.987 - 5.013: 99.7094% ( 3) 00:16:06.505 5.013 - 5.040: 99.7143% ( 1) 00:16:06.505 5.040 - 5.067: 99.7340% ( 4) 00:16:06.505 5.067 - 5.093: 99.7390% ( 1) 00:16:06.505 5.093 - 5.120: 99.7488% ( 2) 00:16:06.505 5.200 - 5.227: 99.7587% ( 2) 00:16:06.505 5.413 - 5.440: 99.7685% ( 2) 00:16:06.505 5.440 - 5.467: 99.7734% ( 1) 00:16:06.505 5.467 - 5.493: 99.7784% ( 1) 00:16:06.505 5.547 - 5.573: 99.7931% ( 3) 00:16:06.505 5.573 - 5.600: 99.7981% ( 1) 00:16:06.505 5.653 - 5.680: 99.8030% ( 1) 00:16:06.505 5.707 - 5.733: 99.8178% ( 3) 00:16:06.505 5.733 - 5.760: 99.8227% ( 1) 00:16:06.505 5.813 - 5.840: 99.8276% ( 1) 00:16:06.505 5.840 - 5.867: 99.8375% ( 2) 00:16:06.505 5.867 - 5.893: 99.8424% ( 1) 00:16:06.505 5.920 - 5.947: 99.8473% ( 1) 00:16:06.505 5.947 - 5.973: 99.8522% ( 1) 00:16:06.505 5.973 - 6.000: 99.8572% ( 1) 00:16:06.505 6.027 - 6.053: 99.8621% ( 1) 00:16:06.505 6.053 - 6.080: 99.8670% ( 1) 00:16:06.505 6.080 - 6.107: 99.8719% ( 1) 00:16:06.505 6.107 - 6.133: 99.8769% ( 1) 00:16:06.505 6.187 - 6.213: 99.8867% ( 2) 00:16:06.505 6.293 - 6.320: 99.8916% ( 1) 00:16:06.505 6.453 - 6.480: 99.8966% ( 1) 00:16:06.505 6.533 - 6.560: 99.9015% ( 1) 00:16:06.505 6.560 - 6.587: 99.9064% ( 1) 00:16:06.505 6.747 - 6.773: 99.9113% ( 1) 
00:16:06.505 6.773 - 6.800: 99.9163% ( 1) 00:16:06.505 6.933 - 6.987: 99.9212% ( 1) 00:16:06.505 [2024-11-20 15:25:55.075092] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:06.505 7.627 - 7.680: 99.9261% ( 1) 00:16:06.505 10.667 - 10.720: 99.9310% ( 1) 00:16:06.505 13.387 - 13.440: 99.9360% ( 1) 00:16:06.505 128.000 - 128.853: 99.9409% ( 1) 00:16:06.505 3986.773 - 4014.080: 100.0000% ( 12) 00:16:06.505 00:16:06.505 Complete histogram 00:16:06.505 ================== 00:16:06.505 Range in us Cumulative Count 00:16:06.505 1.633 - 1.640: 0.2512% ( 51) 00:16:06.505 1.640 - 1.647: 0.8668% ( 125) 00:16:06.505 1.647 - 1.653: 0.9555% ( 18) 00:16:06.505 1.653 - 1.660: 1.0294% ( 15) 00:16:06.505 1.660 - 1.667: 1.1229% ( 19) 00:16:06.505 1.667 - 1.673: 1.1525% ( 6) 00:16:06.505 1.673 - 1.680: 1.1623% ( 2) 00:16:06.505 1.680 - 1.687: 1.1870% ( 5) 00:16:06.505 1.693 - 1.700: 1.2165% ( 6) 00:16:06.505 1.700 - 1.707: 4.8217% ( 732) 00:16:06.505 1.707 - 1.720: 55.5112% ( 10292) 00:16:06.505 1.720 - 1.733: 72.7591% ( 3502) 00:16:06.505 1.733 - 1.747: 81.1367% ( 1701) 00:16:06.505 1.747 - 1.760: 83.1659% ( 412) 00:16:06.505 1.760 - 1.773: 86.4460% ( 666) 00:16:06.505 1.773 - 1.787: 92.3857% ( 1206) 00:16:06.505 1.787 - 1.800: 96.4588% ( 827) 00:16:06.505 1.800 - 1.813: 98.5422% ( 423) 00:16:06.505 1.813 - 1.827: 99.2218% ( 138) 00:16:06.505 1.827 - 1.840: 99.3893% ( 34) 00:16:06.505 1.840 - 1.853: 99.4139% ( 5) 00:16:06.505 3.320 - 3.333: 99.4188% ( 1) 00:16:06.505 3.347 - 3.360: 99.4238% ( 1) 00:16:06.505 3.440 - 3.467: 99.4287% ( 1) 00:16:06.505 3.467 - 3.493: 99.4385% ( 2) 00:16:06.505 3.627 - 3.653: 99.4435% ( 1) 00:16:06.505 3.653 - 3.680: 99.4484% ( 1) 00:16:06.505 3.707 - 3.733: 99.4533% ( 1) 00:16:06.505 3.813 - 3.840: 99.4582% ( 1) 00:16:06.505 3.867 - 3.893: 99.4632% ( 1) 00:16:06.505 3.947 - 3.973: 99.4730% ( 2) 00:16:06.505 3.973 - 4.000: 99.4779% ( 1) 00:16:06.505 4.053 - 4.080: 99.4829% ( 1) 00:16:06.505 4.133 - 4.160: 99.4878% ( 1) 00:16:06.505 4.160 - 4.187: 99.4927% ( 1) 00:16:06.505 4.187 - 4.213: 99.4976% ( 1) 00:16:06.505 4.373 - 4.400: 99.5026% ( 1) 00:16:06.505 4.507 - 4.533: 99.5075% ( 1) 00:16:06.505 4.533 - 4.560: 99.5124% ( 1) 00:16:06.505 4.560 - 4.587: 99.5173% ( 1) 00:16:06.505 4.587 - 4.613: 99.5272% ( 2) 00:16:06.505 4.613 - 4.640: 99.5370% ( 2) 00:16:06.505 4.720 - 4.747: 99.5420% ( 1) 00:16:06.505 4.747 - 4.773: 99.5469% ( 1) 00:16:06.505 4.827 - 4.853: 99.5518% ( 1) 00:16:06.505 4.907 - 4.933: 99.5617% ( 2) 00:16:06.505 5.040 - 5.067: 99.5666% ( 1) 00:16:06.505 5.200 - 5.227: 99.5715% ( 1) 00:16:06.505 5.333 - 5.360: 99.5764% ( 1) 00:16:06.505 5.627 - 5.653: 99.5814% ( 1) 00:16:06.505 6.000 - 6.027: 99.5863% ( 1) 00:16:06.505 6.400 - 6.427: 99.5912% ( 1) 00:16:06.505 9.067 - 9.120: 99.5961% ( 1) 00:16:06.505 11.573 - 11.627: 99.6011% ( 1) 00:16:06.505 33.493 - 33.707: 99.6060% ( 1) 00:16:06.505 3986.773 - 4014.080: 100.0000% ( 80) 00:16:06.505 00:16:06.505 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:06.505 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:06.505 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:06.505 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local 
malloc_num=Malloc3 00:16:06.505 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:06.505 [ 00:16:06.505 { 00:16:06.505 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:06.505 "subtype": "Discovery", 00:16:06.505 "listen_addresses": [], 00:16:06.505 "allow_any_host": true, 00:16:06.505 "hosts": [] 00:16:06.505 }, 00:16:06.505 { 00:16:06.505 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:06.505 "subtype": "NVMe", 00:16:06.505 "listen_addresses": [ 00:16:06.505 { 00:16:06.505 "trtype": "VFIOUSER", 00:16:06.505 "adrfam": "IPv4", 00:16:06.505 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:06.505 "trsvcid": "0" 00:16:06.505 } 00:16:06.505 ], 00:16:06.505 "allow_any_host": true, 00:16:06.505 "hosts": [], 00:16:06.505 "serial_number": "SPDK1", 00:16:06.505 "model_number": "SPDK bdev Controller", 00:16:06.505 "max_namespaces": 32, 00:16:06.505 "min_cntlid": 1, 00:16:06.505 "max_cntlid": 65519, 00:16:06.505 "namespaces": [ 00:16:06.505 { 00:16:06.505 "nsid": 1, 00:16:06.505 "bdev_name": "Malloc1", 00:16:06.505 "name": "Malloc1", 00:16:06.505 "nguid": "819E3D8A57AA4631830839E4ECBB19AD", 00:16:06.505 "uuid": "819e3d8a-57aa-4631-8308-39e4ecbb19ad" 00:16:06.505 } 00:16:06.505 ] 00:16:06.505 }, 00:16:06.505 { 00:16:06.505 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:06.505 "subtype": "NVMe", 00:16:06.505 "listen_addresses": [ 00:16:06.505 { 00:16:06.505 "trtype": "VFIOUSER", 00:16:06.506 "adrfam": "IPv4", 00:16:06.506 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:06.506 "trsvcid": "0" 00:16:06.506 } 00:16:06.506 ], 00:16:06.506 "allow_any_host": true, 00:16:06.506 "hosts": [], 00:16:06.506 "serial_number": "SPDK2", 00:16:06.506 "model_number": "SPDK bdev Controller", 00:16:06.506 "max_namespaces": 32, 00:16:06.506 "min_cntlid": 1, 00:16:06.506 "max_cntlid": 65519, 00:16:06.506 "namespaces": [ 00:16:06.506 { 00:16:06.506 "nsid": 1, 00:16:06.506 "bdev_name": "Malloc2", 00:16:06.506 "name": "Malloc2", 00:16:06.506 "nguid": "0ABB982B12A042EBAC467A1EC21E3119", 00:16:06.506 "uuid": "0abb982b-12a0-42eb-ac46-7a1ec21e3119" 00:16:06.506 } 00:16:06.506 ] 00:16:06.506 } 00:16:06.506 ] 00:16:06.506 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:06.506 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=557544 00:16:06.506 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:06.506 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:06.506 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:16:06.506 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:06.506 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:06.506 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:06.506 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:06.506 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:06.506 [2024-11-20 15:25:55.446567] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:06.767 Malloc3 00:16:06.767 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:06.767 [2024-11-20 15:25:55.633851] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:06.767 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:06.767 Asynchronous Event Request test 00:16:06.767 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:06.767 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:06.767 Registering asynchronous event callbacks... 00:16:06.767 Starting namespace attribute notice tests for all controllers... 00:16:06.767 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:06.767 aer_cb - Changed Namespace 00:16:06.767 Cleaning up... 00:16:07.029 [ 00:16:07.029 { 00:16:07.029 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:07.029 "subtype": "Discovery", 00:16:07.029 "listen_addresses": [], 00:16:07.029 "allow_any_host": true, 00:16:07.029 "hosts": [] 00:16:07.029 }, 00:16:07.029 { 00:16:07.029 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:07.029 "subtype": "NVMe", 00:16:07.029 "listen_addresses": [ 00:16:07.029 { 00:16:07.029 "trtype": "VFIOUSER", 00:16:07.029 "adrfam": "IPv4", 00:16:07.029 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:07.029 "trsvcid": "0" 00:16:07.029 } 00:16:07.029 ], 00:16:07.029 "allow_any_host": true, 00:16:07.029 "hosts": [], 00:16:07.029 "serial_number": "SPDK1", 00:16:07.029 "model_number": "SPDK bdev Controller", 00:16:07.029 "max_namespaces": 32, 00:16:07.029 "min_cntlid": 1, 00:16:07.029 "max_cntlid": 65519, 00:16:07.029 "namespaces": [ 00:16:07.029 { 00:16:07.029 "nsid": 1, 00:16:07.029 "bdev_name": "Malloc1", 00:16:07.029 "name": "Malloc1", 00:16:07.029 "nguid": "819E3D8A57AA4631830839E4ECBB19AD", 00:16:07.029 "uuid": "819e3d8a-57aa-4631-8308-39e4ecbb19ad" 00:16:07.029 }, 00:16:07.029 { 00:16:07.029 "nsid": 2, 00:16:07.029 "bdev_name": "Malloc3", 00:16:07.029 "name": "Malloc3", 00:16:07.029 "nguid": "616ACC40F1D64FC7AEAF83EBAA0A1541", 00:16:07.029 "uuid": "616acc40-f1d6-4fc7-aeaf-83ebaa0a1541" 00:16:07.029 } 00:16:07.029 ] 00:16:07.029 }, 00:16:07.029 { 00:16:07.029 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:07.029 "subtype": "NVMe", 00:16:07.029 "listen_addresses": [ 00:16:07.029 { 00:16:07.029 "trtype": "VFIOUSER", 00:16:07.029 "adrfam": "IPv4", 00:16:07.029 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:07.029 "trsvcid": "0" 00:16:07.029 } 00:16:07.029 ], 00:16:07.029 "allow_any_host": true, 00:16:07.029 "hosts": [], 00:16:07.029 "serial_number": "SPDK2", 00:16:07.029 "model_number": "SPDK bdev 
Controller", 00:16:07.029 "max_namespaces": 32, 00:16:07.029 "min_cntlid": 1, 00:16:07.029 "max_cntlid": 65519, 00:16:07.029 "namespaces": [ 00:16:07.029 { 00:16:07.029 "nsid": 1, 00:16:07.029 "bdev_name": "Malloc2", 00:16:07.029 "name": "Malloc2", 00:16:07.029 "nguid": "0ABB982B12A042EBAC467A1EC21E3119", 00:16:07.029 "uuid": "0abb982b-12a0-42eb-ac46-7a1ec21e3119" 00:16:07.029 } 00:16:07.029 ] 00:16:07.029 } 00:16:07.029 ] 00:16:07.029 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 557544 00:16:07.029 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:07.029 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:07.029 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:07.029 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:07.029 [2024-11-20 15:25:55.876620] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:16:07.029 [2024-11-20 15:25:55.876692] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid557594 ] 00:16:07.029 [2024-11-20 15:25:55.916416] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:07.029 [2024-11-20 15:25:55.921603] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:07.029 [2024-11-20 15:25:55.921623] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f8f14547000 00:16:07.029 [2024-11-20 15:25:55.922603] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:07.029 [2024-11-20 15:25:55.923613] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:07.029 [2024-11-20 15:25:55.924617] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:07.029 [2024-11-20 15:25:55.925622] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:07.029 [2024-11-20 15:25:55.926624] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:07.029 [2024-11-20 15:25:55.927633] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:07.029 [2024-11-20 15:25:55.928639] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:07.029 [2024-11-20 15:25:55.929647] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:16:07.029 [2024-11-20 15:25:55.930653] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:07.029 [2024-11-20 15:25:55.930661] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f8f1453c000 00:16:07.029 [2024-11-20 15:25:55.931573] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:07.029 [2024-11-20 15:25:55.940952] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:07.029 [2024-11-20 15:25:55.940973] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:16:07.029 [2024-11-20 15:25:55.946052] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:07.029 [2024-11-20 15:25:55.946088] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:07.029 [2024-11-20 15:25:55.946149] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:16:07.029 [2024-11-20 15:25:55.946166] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:16:07.029 [2024-11-20 15:25:55.946173] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:16:07.029 [2024-11-20 15:25:55.947059] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:07.029 [2024-11-20 15:25:55.947068] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:16:07.029 [2024-11-20 15:25:55.947073] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:16:07.029 [2024-11-20 15:25:55.948069] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:07.029 [2024-11-20 15:25:55.948076] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:16:07.029 [2024-11-20 15:25:55.948082] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:16:07.029 [2024-11-20 15:25:55.949078] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:07.030 [2024-11-20 15:25:55.949085] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:07.030 [2024-11-20 15:25:55.950085] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:07.030 [2024-11-20 15:25:55.950092] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
00:16:07.030 [2024-11-20 15:25:55.950096] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:16:07.030 [2024-11-20 15:25:55.950101] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:07.030 [2024-11-20 15:25:55.950208] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:16:07.030 [2024-11-20 15:25:55.950211] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:07.030 [2024-11-20 15:25:55.950215] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:07.030 [2024-11-20 15:25:55.951089] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:07.030 [2024-11-20 15:25:55.952096] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:07.030 [2024-11-20 15:25:55.953109] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:07.030 [2024-11-20 15:25:55.954112] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:07.030 [2024-11-20 15:25:55.954146] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:07.030 [2024-11-20 15:25:55.955121] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:07.030 [2024-11-20 15:25:55.955128] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:07.030 [2024-11-20 15:25:55.955132] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:16:07.030 [2024-11-20 15:25:55.955149] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:16:07.030 [2024-11-20 15:25:55.955154] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:16:07.030 [2024-11-20 15:25:55.955168] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:07.030 [2024-11-20 15:25:55.955172] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:07.030 [2024-11-20 15:25:55.955176] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:07.030 [2024-11-20 15:25:55.955186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:07.030 [2024-11-20 15:25:55.963167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:07.030 
[2024-11-20 15:25:55.963176] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:16:07.030 [2024-11-20 15:25:55.963180] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:16:07.030 [2024-11-20 15:25:55.963183] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:16:07.030 [2024-11-20 15:25:55.963187] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:07.030 [2024-11-20 15:25:55.963192] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:16:07.030 [2024-11-20 15:25:55.963195] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:16:07.030 [2024-11-20 15:25:55.963199] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:16:07.030 [2024-11-20 15:25:55.963206] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:16:07.030 [2024-11-20 15:25:55.963214] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:07.030 [2024-11-20 15:25:55.971163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:07.030 [2024-11-20 15:25:55.971173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.030 [2024-11-20 15:25:55.971180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.030 [2024-11-20 15:25:55.971186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.030 [2024-11-20 15:25:55.971192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.030 [2024-11-20 15:25:55.971195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:16:07.030 [2024-11-20 15:25:55.971201] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:07.030 [2024-11-20 15:25:55.971207] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:07.030 [2024-11-20 15:25:55.979164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:07.030 [2024-11-20 15:25:55.979172] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:16:07.030 [2024-11-20 15:25:55.979178] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:16:07.030 [2024-11-20 15:25:55.979183] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:16:07.030 [2024-11-20 15:25:55.979187] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:16:07.030 [2024-11-20 15:25:55.979194] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:07.030 [2024-11-20 15:25:55.987163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:07.030 [2024-11-20 15:25:55.987211] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:16:07.030 [2024-11-20 15:25:55.987217] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:16:07.030 [2024-11-20 15:25:55.987222] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:07.030 [2024-11-20 15:25:55.987225] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:07.030 [2024-11-20 15:25:55.987228] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:07.030 [2024-11-20 15:25:55.987233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:07.293 [2024-11-20 15:25:55.995164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:07.293 [2024-11-20 15:25:55.995173] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:16:07.293 [2024-11-20 15:25:55.995184] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:16:07.293 [2024-11-20 15:25:55.995189] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:16:07.293 [2024-11-20 15:25:55.995194] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:07.293 [2024-11-20 15:25:55.995198] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:07.293 [2024-11-20 15:25:55.995200] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:07.293 [2024-11-20 15:25:55.995204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:07.293 [2024-11-20 15:25:56.003164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:07.293 [2024-11-20 15:25:56.003178] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:07.293 [2024-11-20 15:25:56.003184] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:16:07.293 [2024-11-20 15:25:56.003189] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:07.293 [2024-11-20 15:25:56.003192] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:07.293 [2024-11-20 15:25:56.003195] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:07.293 [2024-11-20 15:25:56.003199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:07.293 [2024-11-20 15:25:56.011164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:07.293 [2024-11-20 15:25:56.011172] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:07.293 [2024-11-20 15:25:56.011177] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:16:07.293 [2024-11-20 15:25:56.011183] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:16:07.293 [2024-11-20 15:25:56.011188] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:16:07.293 [2024-11-20 15:25:56.011192] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:07.293 [2024-11-20 15:25:56.011196] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:16:07.293 [2024-11-20 15:25:56.011199] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:16:07.293 [2024-11-20 15:25:56.011203] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:16:07.293 [2024-11-20 15:25:56.011206] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:16:07.293 [2024-11-20 15:25:56.011220] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:07.293 [2024-11-20 15:25:56.019163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:07.293 [2024-11-20 15:25:56.019174] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:07.293 [2024-11-20 15:25:56.027164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:07.294 [2024-11-20 15:25:56.027175] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:07.294 [2024-11-20 15:25:56.035164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:16:07.294 [2024-11-20 15:25:56.035174] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:07.294 [2024-11-20 15:25:56.043164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:07.294 [2024-11-20 15:25:56.043178] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:07.294 [2024-11-20 15:25:56.043181] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:07.294 [2024-11-20 15:25:56.043184] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:07.294 [2024-11-20 15:25:56.043186] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:07.294 [2024-11-20 15:25:56.043189] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:07.294 [2024-11-20 15:25:56.043194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:07.294 [2024-11-20 15:25:56.043199] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:07.294 [2024-11-20 15:25:56.043202] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:07.294 [2024-11-20 15:25:56.043206] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:07.294 [2024-11-20 15:25:56.043211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:07.294 [2024-11-20 15:25:56.043216] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:07.294 [2024-11-20 15:25:56.043220] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:07.294 [2024-11-20 15:25:56.043222] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:07.294 [2024-11-20 15:25:56.043226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:07.294 [2024-11-20 15:25:56.043232] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:07.294 [2024-11-20 15:25:56.043235] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:07.294 [2024-11-20 15:25:56.043237] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:07.294 [2024-11-20 15:25:56.043241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:07.294 [2024-11-20 15:25:56.051165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:07.294 [2024-11-20 15:25:56.051176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:07.294 [2024-11-20 15:25:56.051184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:07.294 
[2024-11-20 15:25:56.051189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:07.294 ===================================================== 00:16:07.294 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:07.294 ===================================================== 00:16:07.294 Controller Capabilities/Features 00:16:07.294 ================================ 00:16:07.294 Vendor ID: 4e58 00:16:07.294 Subsystem Vendor ID: 4e58 00:16:07.294 Serial Number: SPDK2 00:16:07.294 Model Number: SPDK bdev Controller 00:16:07.294 Firmware Version: 25.01 00:16:07.294 Recommended Arb Burst: 6 00:16:07.294 IEEE OUI Identifier: 8d 6b 50 00:16:07.294 Multi-path I/O 00:16:07.294 May have multiple subsystem ports: Yes 00:16:07.294 May have multiple controllers: Yes 00:16:07.294 Associated with SR-IOV VF: No 00:16:07.294 Max Data Transfer Size: 131072 00:16:07.294 Max Number of Namespaces: 32 00:16:07.294 Max Number of I/O Queues: 127 00:16:07.294 NVMe Specification Version (VS): 1.3 00:16:07.294 NVMe Specification Version (Identify): 1.3 00:16:07.294 Maximum Queue Entries: 256 00:16:07.294 Contiguous Queues Required: Yes 00:16:07.294 Arbitration Mechanisms Supported 00:16:07.294 Weighted Round Robin: Not Supported 00:16:07.294 Vendor Specific: Not Supported 00:16:07.294 Reset Timeout: 15000 ms 00:16:07.294 Doorbell Stride: 4 bytes 00:16:07.294 NVM Subsystem Reset: Not Supported 00:16:07.294 Command Sets Supported 00:16:07.294 NVM Command Set: Supported 00:16:07.294 Boot Partition: Not Supported 00:16:07.294 Memory Page Size Minimum: 4096 bytes 00:16:07.294 Memory Page Size Maximum: 4096 bytes 00:16:07.294 Persistent Memory Region: Not Supported 00:16:07.294 Optional Asynchronous Events Supported 00:16:07.294 Namespace Attribute Notices: Supported 00:16:07.294 Firmware Activation Notices: Not Supported 00:16:07.294 ANA Change Notices: Not Supported 00:16:07.294 PLE Aggregate Log Change Notices: Not Supported 00:16:07.294 LBA Status Info Alert Notices: Not Supported 00:16:07.294 EGE Aggregate Log Change Notices: Not Supported 00:16:07.294 Normal NVM Subsystem Shutdown event: Not Supported 00:16:07.294 Zone Descriptor Change Notices: Not Supported 00:16:07.294 Discovery Log Change Notices: Not Supported 00:16:07.294 Controller Attributes 00:16:07.294 128-bit Host Identifier: Supported 00:16:07.294 Non-Operational Permissive Mode: Not Supported 00:16:07.294 NVM Sets: Not Supported 00:16:07.294 Read Recovery Levels: Not Supported 00:16:07.294 Endurance Groups: Not Supported 00:16:07.294 Predictable Latency Mode: Not Supported 00:16:07.294 Traffic Based Keep ALive: Not Supported 00:16:07.294 Namespace Granularity: Not Supported 00:16:07.294 SQ Associations: Not Supported 00:16:07.294 UUID List: Not Supported 00:16:07.294 Multi-Domain Subsystem: Not Supported 00:16:07.294 Fixed Capacity Management: Not Supported 00:16:07.294 Variable Capacity Management: Not Supported 00:16:07.294 Delete Endurance Group: Not Supported 00:16:07.294 Delete NVM Set: Not Supported 00:16:07.294 Extended LBA Formats Supported: Not Supported 00:16:07.294 Flexible Data Placement Supported: Not Supported 00:16:07.294 00:16:07.294 Controller Memory Buffer Support 00:16:07.294 ================================ 00:16:07.294 Supported: No 00:16:07.294 00:16:07.294 Persistent Memory Region Support 00:16:07.294 ================================ 00:16:07.294 Supported: No 00:16:07.294 00:16:07.294 Admin Command Set Attributes 
00:16:07.294 ============================ 00:16:07.294 Security Send/Receive: Not Supported 00:16:07.294 Format NVM: Not Supported 00:16:07.294 Firmware Activate/Download: Not Supported 00:16:07.294 Namespace Management: Not Supported 00:16:07.294 Device Self-Test: Not Supported 00:16:07.294 Directives: Not Supported 00:16:07.294 NVMe-MI: Not Supported 00:16:07.294 Virtualization Management: Not Supported 00:16:07.294 Doorbell Buffer Config: Not Supported 00:16:07.294 Get LBA Status Capability: Not Supported 00:16:07.294 Command & Feature Lockdown Capability: Not Supported 00:16:07.294 Abort Command Limit: 4 00:16:07.294 Async Event Request Limit: 4 00:16:07.294 Number of Firmware Slots: N/A 00:16:07.294 Firmware Slot 1 Read-Only: N/A 00:16:07.294 Firmware Activation Without Reset: N/A 00:16:07.294 Multiple Update Detection Support: N/A 00:16:07.294 Firmware Update Granularity: No Information Provided 00:16:07.294 Per-Namespace SMART Log: No 00:16:07.294 Asymmetric Namespace Access Log Page: Not Supported 00:16:07.294 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:07.294 Command Effects Log Page: Supported 00:16:07.294 Get Log Page Extended Data: Supported 00:16:07.294 Telemetry Log Pages: Not Supported 00:16:07.294 Persistent Event Log Pages: Not Supported 00:16:07.294 Supported Log Pages Log Page: May Support 00:16:07.294 Commands Supported & Effects Log Page: Not Supported 00:16:07.294 Feature Identifiers & Effects Log Page:May Support 00:16:07.294 NVMe-MI Commands & Effects Log Page: May Support 00:16:07.294 Data Area 4 for Telemetry Log: Not Supported 00:16:07.294 Error Log Page Entries Supported: 128 00:16:07.294 Keep Alive: Supported 00:16:07.294 Keep Alive Granularity: 10000 ms 00:16:07.294 00:16:07.294 NVM Command Set Attributes 00:16:07.294 ========================== 00:16:07.294 Submission Queue Entry Size 00:16:07.294 Max: 64 00:16:07.294 Min: 64 00:16:07.294 Completion Queue Entry Size 00:16:07.294 Max: 16 00:16:07.294 Min: 16 00:16:07.294 Number of Namespaces: 32 00:16:07.294 Compare Command: Supported 00:16:07.294 Write Uncorrectable Command: Not Supported 00:16:07.294 Dataset Management Command: Supported 00:16:07.294 Write Zeroes Command: Supported 00:16:07.294 Set Features Save Field: Not Supported 00:16:07.294 Reservations: Not Supported 00:16:07.294 Timestamp: Not Supported 00:16:07.294 Copy: Supported 00:16:07.294 Volatile Write Cache: Present 00:16:07.294 Atomic Write Unit (Normal): 1 00:16:07.294 Atomic Write Unit (PFail): 1 00:16:07.294 Atomic Compare & Write Unit: 1 00:16:07.294 Fused Compare & Write: Supported 00:16:07.294 Scatter-Gather List 00:16:07.294 SGL Command Set: Supported (Dword aligned) 00:16:07.294 SGL Keyed: Not Supported 00:16:07.294 SGL Bit Bucket Descriptor: Not Supported 00:16:07.295 SGL Metadata Pointer: Not Supported 00:16:07.295 Oversized SGL: Not Supported 00:16:07.295 SGL Metadata Address: Not Supported 00:16:07.295 SGL Offset: Not Supported 00:16:07.295 Transport SGL Data Block: Not Supported 00:16:07.295 Replay Protected Memory Block: Not Supported 00:16:07.295 00:16:07.295 Firmware Slot Information 00:16:07.295 ========================= 00:16:07.295 Active slot: 1 00:16:07.295 Slot 1 Firmware Revision: 25.01 00:16:07.295 00:16:07.295 00:16:07.295 Commands Supported and Effects 00:16:07.295 ============================== 00:16:07.295 Admin Commands 00:16:07.295 -------------- 00:16:07.295 Get Log Page (02h): Supported 00:16:07.295 Identify (06h): Supported 00:16:07.295 Abort (08h): Supported 00:16:07.295 Set Features (09h): Supported 
00:16:07.295 Get Features (0Ah): Supported
00:16:07.295 Asynchronous Event Request (0Ch): Supported
00:16:07.295 Keep Alive (18h): Supported
00:16:07.295 I/O Commands
00:16:07.295 ------------
00:16:07.295 Flush (00h): Supported LBA-Change
00:16:07.295 Write (01h): Supported LBA-Change
00:16:07.295 Read (02h): Supported
00:16:07.295 Compare (05h): Supported
00:16:07.295 Write Zeroes (08h): Supported LBA-Change
00:16:07.295 Dataset Management (09h): Supported LBA-Change
00:16:07.295 Copy (19h): Supported LBA-Change
00:16:07.295
00:16:07.295 Error Log
00:16:07.295 =========
00:16:07.295
00:16:07.295 Arbitration
00:16:07.295 ===========
00:16:07.295 Arbitration Burst: 1
00:16:07.295
00:16:07.295 Power Management
00:16:07.295 ================
00:16:07.295 Number of Power States: 1
00:16:07.295 Current Power State: Power State #0
00:16:07.295 Power State #0:
00:16:07.295 Max Power: 0.00 W
00:16:07.295 Non-Operational State: Operational
00:16:07.295 Entry Latency: Not Reported
00:16:07.295 Exit Latency: Not Reported
00:16:07.295 Relative Read Throughput: 0
00:16:07.295 Relative Read Latency: 0
00:16:07.295 Relative Write Throughput: 0
00:16:07.295 Relative Write Latency: 0
00:16:07.295 Idle Power: Not Reported
00:16:07.295 Active Power: Not Reported
00:16:07.295 Non-Operational Permissive Mode: Not Supported
00:16:07.295
00:16:07.295 Health Information
00:16:07.295 ==================
00:16:07.295 Critical Warnings:
00:16:07.295 Available Spare Space: OK
00:16:07.295 Temperature: OK
00:16:07.295 Device Reliability: OK
00:16:07.295 Read Only: No
00:16:07.295 Volatile Memory Backup: OK
00:16:07.295 Current Temperature: 0 Kelvin (-273 Celsius)
00:16:07.295 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:16:07.295 Available Spare: 0%
00:16:07.295 Available Spare Threshold: 0%
00:16:07.295 Life Percentage Used: 0%
00:16:07.295 Data Units Read: 0
00:16:07.295 Data Units Written: 0
00:16:07.295 Host Read Commands: 0
00:16:07.295 Host Write Commands: 0
00:16:07.295 Controller Busy Time: 0 minutes
00:16:07.295 Power Cycles: 0
00:16:07.295 Power On Hours: 0 hours
00:16:07.295 Unsafe Shutdowns: 0
00:16:07.295 Unrecoverable Media Errors: 0
00:16:07.295 Lifetime Error Log Entries: 0
00:16:07.295 Warning Temperature Time: 0 minutes
00:16:07.295 Critical Temperature Time: 0 minutes
00:16:07.295
00:16:07.295 Number of Queues
00:16:07.295 ================
00:16:07.295 Number of I/O Submission Queues: 127
00:16:07.295 Number of I/O Completion Queues: 127
00:16:07.295
00:16:07.295 Active Namespaces
00:16:07.295 =================
00:16:07.295 Namespace ID:1
00:16:07.295 Error Recovery Timeout: Unlimited
00:16:07.295 Command Set Identifier: NVM (00h)
00:16:07.295 Deallocate: Supported
00:16:07.295 Deallocated/Unwritten Error: Not Supported
00:16:07.295 Deallocated Read Value: Unknown
00:16:07.295 Deallocate in Write Zeroes: Not Supported
00:16:07.295 Deallocated Guard Field: 0xFFFF
00:16:07.295 Flush: Supported
00:16:07.295 Reservation: Supported
00:16:07.295 Namespace Sharing Capabilities: Multiple Controllers
00:16:07.295 Size (in LBAs): 131072 (0GiB)
00:16:07.295 Capacity (in LBAs): 131072 (0GiB)
00:16:07.295 Utilization (in LBAs): 131072 (0GiB)
00:16:07.295 NGUID: 0ABB982B12A042EBAC467A1EC21E3119
00:16:07.295 UUID: 0abb982b-12a0-42eb-ac46-7a1ec21e3119
00:16:07.295 Thin Provisioning: Not Supported
00:16:07.295 Per-NS Atomic Units: Yes
00:16:07.295 Atomic Boundary Size (Normal): 0
00:16:07.295 Atomic Boundary Size (PFail): 0
00:16:07.295 Atomic Boundary Offset: 0
00:16:07.295 Maximum Single Source Range Length: 65535
00:16:07.295 Maximum Copy Length: 65535
00:16:07.295 Maximum Source Range Count: 1
00:16:07.295 NGUID/EUI64 Never Reused: No
00:16:07.295 Namespace Write Protected: No
00:16:07.295 Number of LBA Formats: 1
00:16:07.295 Current LBA Format: LBA Format #00
00:16:07.295 LBA Format #00: Data Size: 512 Metadata Size: 0
00:16:07.295
00:16:07.295 [2024-11-20 15:25:56.051262] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
00:16:07.295 [2024-11-20 15:25:56.059164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
00:16:07.295 [2024-11-20 15:25:56.059187] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD
00:16:07.295 [2024-11-20 15:25:56.059194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:07.295 [2024-11-20 15:25:56.059199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:07.295 [2024-11-20 15:25:56.059204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:07.295 [2024-11-20 15:25:56.059208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:07.295 [2024-11-20 15:25:56.059239] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001
00:16:07.295 [2024-11-20 15:25:56.059246] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001
00:16:07.295 [2024-11-20 15:25:56.060250] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:16:07.295 [2024-11-20 15:25:56.060288] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us
00:16:07.295 [2024-11-20 15:25:56.060294] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms
00:16:07.295 [2024-11-20 15:25:56.061252] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9
00:16:07.295 [2024-11-20 15:25:56.061261] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds
00:16:07.295 [2024-11-20 15:25:56.061307] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl
00:16:07.295 [2024-11-20 15:25:56.062274] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:16:07.295 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
00:16:07.295 [2024-11-20 15:25:56.249534] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:16:12.589 Initializing NVMe Controllers
00:16:12.589
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:12.589 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:12.589 Initialization complete. Launching workers. 00:16:12.589 ======================================================== 00:16:12.589 Latency(us) 00:16:12.589 Device Information : IOPS MiB/s Average min max 00:16:12.589 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40003.40 156.26 3199.80 842.38 9087.90 00:16:12.589 ======================================================== 00:16:12.589 Total : 40003.40 156.26 3199.80 842.38 9087.90 00:16:12.589 00:16:12.589 [2024-11-20 15:26:01.354352] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:12.589 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:12.589 [2024-11-20 15:26:01.543938] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:17.876 Initializing NVMe Controllers 00:16:17.876 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:17.876 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:17.876 Initialization complete. Launching workers. 00:16:17.876 ======================================================== 00:16:17.876 Latency(us) 00:16:17.876 Device Information : IOPS MiB/s Average min max 00:16:17.876 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40042.14 156.41 3196.31 843.29 9776.76 00:16:17.876 ======================================================== 00:16:17.876 Total : 40042.14 156.41 3196.31 843.29 9776.76 00:16:17.876 00:16:17.876 [2024-11-20 15:26:06.561682] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:17.876 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:17.876 [2024-11-20 15:26:06.761861] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:23.172 [2024-11-20 15:26:11.910240] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:23.172 Initializing NVMe Controllers 00:16:23.172 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:23.172 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:23.172 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:23.172 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:23.172 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:23.172 Initialization complete. Launching workers. 
00:16:23.172 Starting thread on core 2 00:16:23.172 Starting thread on core 3 00:16:23.172 Starting thread on core 1 00:16:23.172 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:23.433 [2024-11-20 15:26:12.156538] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:26.733 [2024-11-20 15:26:15.234684] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:26.733 Initializing NVMe Controllers 00:16:26.733 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:26.733 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:26.733 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:26.733 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:26.733 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:26.733 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:26.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:26.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:26.733 Initialization complete. Launching workers. 00:16:26.733 Starting thread on core 1 with urgent priority queue 00:16:26.733 Starting thread on core 2 with urgent priority queue 00:16:26.733 Starting thread on core 3 with urgent priority queue 00:16:26.733 Starting thread on core 0 with urgent priority queue 00:16:26.733 SPDK bdev Controller (SPDK2 ) core 0: 13172.00 IO/s 7.59 secs/100000 ios 00:16:26.733 SPDK bdev Controller (SPDK2 ) core 1: 13908.67 IO/s 7.19 secs/100000 ios 00:16:26.733 SPDK bdev Controller (SPDK2 ) core 2: 8020.67 IO/s 12.47 secs/100000 ios 00:16:26.733 SPDK bdev Controller (SPDK2 ) core 3: 15292.67 IO/s 6.54 secs/100000 ios 00:16:26.733 ======================================================== 00:16:26.733 00:16:26.733 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:26.733 [2024-11-20 15:26:15.480527] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:26.733 Initializing NVMe Controllers 00:16:26.733 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:26.733 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:26.733 Namespace ID: 1 size: 0GB 00:16:26.733 Initialization complete. 00:16:26.733 INFO: using host memory buffer for IO 00:16:26.733 Hello world! 
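
Every example binary exercised above (spdk_nvme_perf, reconnect, arbitration, hello_world) is pointed at the same vfio-user endpoint through a -r transport string; only the workload knobs change between runs. A minimal sketch of that pattern, using flags copied from the runs above and assuming the working directory is the SPDK repo root:

# Transport string: transport type, socket path of the emulated controller, target subsystem NQN.
TR='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
# QD 128, 4 KiB sequential reads for 5 s pinned to core 1 (-c 0x2), as in the @84 run.
./build/bin/spdk_nvme_perf -r "$TR" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
# QD 32, 50/50 random read/write mix on cores 1-3 (-c 0xE), as in the @86 reconnect run.
./build/examples/reconnect -r "$TR" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
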
00:16:26.733 [2024-11-20 15:26:15.489607] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:26.733 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:26.994 [2024-11-20 15:26:15.720769] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:27.938 Initializing NVMe Controllers 00:16:27.938 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:27.938 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:27.938 Initialization complete. Launching workers. 00:16:27.938 submit (in ns) avg, min, max = 5644.2, 2815.8, 6990649.2 00:16:27.938 complete (in ns) avg, min, max = 17338.3, 1633.3, 4992845.8 00:16:27.938 00:16:27.938 Submit histogram 00:16:27.938 ================ 00:16:27.938 Range in us Cumulative Count 00:16:27.938 2.813 - 2.827: 0.5416% ( 111) 00:16:27.938 2.827 - 2.840: 2.0592% ( 311) 00:16:27.938 2.840 - 2.853: 5.5238% ( 710) 00:16:27.938 2.853 - 2.867: 10.5451% ( 1029) 00:16:27.938 2.867 - 2.880: 15.5858% ( 1033) 00:16:27.938 2.880 - 2.893: 19.8458% ( 873) 00:16:27.938 2.893 - 2.907: 25.0525% ( 1067) 00:16:27.938 2.907 - 2.920: 30.5080% ( 1118) 00:16:27.938 2.920 - 2.933: 37.0810% ( 1347) 00:16:27.938 2.933 - 2.947: 42.0973% ( 1028) 00:16:27.938 2.947 - 2.960: 47.3967% ( 1086) 00:16:27.938 2.960 - 2.973: 52.6716% ( 1081) 00:16:27.938 2.973 - 2.987: 60.7037% ( 1646) 00:16:27.938 2.987 - 3.000: 69.6726% ( 1838) 00:16:27.938 3.000 - 3.013: 78.3438% ( 1777) 00:16:27.938 3.013 - 3.027: 85.9074% ( 1550) 00:16:27.938 3.027 - 3.040: 91.6508% ( 1177) 00:16:27.938 3.040 - 3.053: 95.2032% ( 728) 00:16:27.938 3.053 - 3.067: 97.4674% ( 464) 00:16:27.938 3.067 - 3.080: 98.5361% ( 219) 00:16:27.938 3.080 - 3.093: 99.1997% ( 136) 00:16:27.938 3.093 - 3.107: 99.4486% ( 51) 00:16:27.938 3.107 - 3.120: 99.5413% ( 19) 00:16:27.938 3.120 - 3.133: 99.5706% ( 6) 00:16:27.938 3.133 - 3.147: 99.5852% ( 3) 00:16:27.938 3.147 - 3.160: 99.5901% ( 1) 00:16:27.938 3.173 - 3.187: 99.5950% ( 1) 00:16:27.938 3.187 - 3.200: 99.5999% ( 1) 00:16:27.938 3.200 - 3.213: 99.6096% ( 2) 00:16:27.938 3.520 - 3.547: 99.6145% ( 1) 00:16:27.938 3.547 - 3.573: 99.6194% ( 1) 00:16:27.938 3.573 - 3.600: 99.6243% ( 1) 00:16:27.938 3.627 - 3.653: 99.6291% ( 1) 00:16:27.938 3.733 - 3.760: 99.6340% ( 1) 00:16:27.938 3.787 - 3.813: 99.6389% ( 1) 00:16:27.938 3.840 - 3.867: 99.6438% ( 1) 00:16:27.938 3.867 - 3.893: 99.6487% ( 1) 00:16:27.938 3.893 - 3.920: 99.6584% ( 2) 00:16:27.938 4.000 - 4.027: 99.6633% ( 1) 00:16:27.938 4.480 - 4.507: 99.6731% ( 2) 00:16:27.938 4.560 - 4.587: 99.6779% ( 1) 00:16:27.938 4.587 - 4.613: 99.6828% ( 1) 00:16:27.938 4.613 - 4.640: 99.6877% ( 1) 00:16:27.938 4.640 - 4.667: 99.6975% ( 2) 00:16:27.938 4.667 - 4.693: 99.7023% ( 1) 00:16:27.938 4.720 - 4.747: 99.7121% ( 2) 00:16:27.938 4.747 - 4.773: 99.7170% ( 1) 00:16:27.938 4.773 - 4.800: 99.7267% ( 2) 00:16:27.938 4.827 - 4.853: 99.7414% ( 3) 00:16:27.938 4.853 - 4.880: 99.7511% ( 2) 00:16:27.938 4.933 - 4.960: 99.7609% ( 2) 00:16:27.938 4.987 - 5.013: 99.7658% ( 1) 00:16:27.938 5.040 - 5.067: 99.7707% ( 1) 00:16:27.938 5.093 - 5.120: 99.7755% ( 1) 00:16:27.938 5.120 - 5.147: 99.7804% ( 1) 00:16:27.938 5.147 - 5.173: 99.7999% ( 4) 00:16:27.938 5.173 - 5.200: 99.8048% ( 1) 00:16:27.938 5.200 - 
5.227: 99.8097% ( 1) 00:16:27.938 5.280 - 5.307: 99.8146% ( 1) 00:16:27.938 5.333 - 5.360: 99.8195% ( 1) 00:16:27.938 5.360 - 5.387: 99.8243% ( 1) 00:16:27.938 5.493 - 5.520: 99.8341% ( 2) 00:16:27.938 5.707 - 5.733: 99.8487% ( 3) 00:16:27.938 5.760 - 5.787: 99.8536% ( 1) 00:16:27.938 5.840 - 5.867: 99.8585% ( 1) 00:16:27.938 5.867 - 5.893: 99.8634% ( 1) 00:16:27.938 5.893 - 5.920: 99.8682% ( 1) 00:16:27.938 5.947 - 5.973: 99.8731% ( 1) 00:16:27.938 5.973 - 6.000: 99.8780% ( 1) 00:16:27.938 6.160 - 6.187: 99.8829% ( 1) 00:16:27.938 6.267 - 6.293: 99.8878% ( 1) 00:16:27.938 6.347 - 6.373: 99.9024% ( 3) 00:16:27.938 6.400 - 6.427: 99.9073% ( 1) 00:16:27.938 6.533 - 6.560: 99.9122% ( 1) 00:16:27.938 6.827 - 6.880: 99.9170% ( 1) 00:16:27.938 6.933 - 6.987: 99.9219% ( 1) 00:16:27.938 7.253 - 7.307: 99.9268% ( 1) 00:16:27.938 7.573 - 7.627: 99.9317% ( 1) 00:16:27.938 13.280 - 13.333: 99.9366% ( 1) 00:16:27.938 [2024-11-20 15:26:16.814676] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:27.938 3986.773 - 4014.080: 99.9951% ( 12) 00:16:27.938 6990.507 - 7045.120: 100.0000% ( 1) 00:16:27.938 00:16:27.938 Complete histogram 00:16:27.938 ================== 00:16:27.938 Range in us Cumulative Count 00:16:27.938 1.633 - 1.640: 0.0098% ( 2) 00:16:27.938 1.640 - 1.647: 0.1122% ( 21) 00:16:27.938 1.647 - 1.653: 0.6441% ( 109) 00:16:27.938 1.653 - 1.660: 0.6880% ( 9) 00:16:27.939 1.660 - 1.667: 0.7466% ( 12) 00:16:27.939 1.667 - 1.673: 0.8637% ( 24) 00:16:27.939 1.673 - 1.680: 0.9076% ( 9) 00:16:27.939 1.680 - 1.687: 0.9223% ( 3) 00:16:27.939 1.687 - 1.693: 1.2248% ( 62) 00:16:27.939 1.693 - 1.700: 39.5355% ( 7851) 00:16:27.939 1.700 - 1.707: 48.9533% ( 1930) 00:16:27.939 1.707 - 1.720: 69.9166% ( 4296) 00:16:27.939 1.720 - 1.733: 80.7642% ( 2223) 00:16:27.939 1.733 - 1.747: 83.8921% ( 641) 00:16:27.939 1.747 - 1.760: 86.2538% ( 484) 00:16:27.939 1.760 - 1.773: 90.7725% ( 926) 00:16:27.939 1.773 - 1.787: 95.4570% ( 960) 00:16:27.939 1.787 - 1.800: 97.8920% ( 499) 00:16:27.939 1.800 - 1.813: 99.1217% ( 252) 00:16:27.939 1.813 - 1.827: 99.4242% ( 62) 00:16:27.939 1.827 - 1.840: 99.4779% ( 11) 00:16:27.939 1.853 - 1.867: 99.4876% ( 2) 00:16:27.939 3.267 - 3.280: 99.4925% ( 1) 00:16:27.939 3.320 - 3.333: 99.4974% ( 1) 00:16:27.939 3.373 - 3.387: 99.5023% ( 1) 00:16:27.939 3.520 - 3.547: 99.5071% ( 1) 00:16:27.939 3.627 - 3.653: 99.5120% ( 1) 00:16:27.939 3.680 - 3.707: 99.5169% ( 1) 00:16:27.939 3.787 - 3.813: 99.5218% ( 1) 00:16:27.939 3.867 - 3.893: 99.5267% ( 1) 00:16:27.939 4.053 - 4.080: 99.5315% ( 1) 00:16:27.939 4.133 - 4.160: 99.5364% ( 1) 00:16:27.939 4.160 - 4.187: 99.5413% ( 1) 00:16:27.939 4.240 - 4.267: 99.5511% ( 2) 00:16:27.939 4.373 - 4.400: 99.5559% ( 1) 00:16:27.939 4.400 - 4.427: 99.5608% ( 1) 00:16:27.939 4.613 - 4.640: 99.5657% ( 1) 00:16:27.939 4.640 - 4.667: 99.5706% ( 1) 00:16:27.939 4.747 - 4.773: 99.5755% ( 1) 00:16:27.939 4.907 - 4.933: 99.5803% ( 1) 00:16:27.939 5.093 - 5.120: 99.5901% ( 2) 00:16:27.939 5.467 - 5.493: 99.5999% ( 2) 00:16:27.939 33.920 - 34.133: 99.6047% ( 1) 00:16:27.939 436.907 - 440.320: 99.6096% ( 1) 00:16:27.939 3072.000 - 3085.653: 99.6145% ( 1) 00:16:27.939 3986.773 - 4014.080: 99.9951% ( 78) 00:16:27.939 4969.813 - 4997.120: 100.0000% ( 1) 00:16:27.939 00:16:27.939 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:27.939 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:27.939 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:27.939 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:27.939 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:28.200 [ 00:16:28.200 { 00:16:28.200 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:28.200 "subtype": "Discovery", 00:16:28.200 "listen_addresses": [], 00:16:28.200 "allow_any_host": true, 00:16:28.200 "hosts": [] 00:16:28.200 }, 00:16:28.200 { 00:16:28.200 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:28.200 "subtype": "NVMe", 00:16:28.200 "listen_addresses": [ 00:16:28.200 { 00:16:28.200 "trtype": "VFIOUSER", 00:16:28.200 "adrfam": "IPv4", 00:16:28.200 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:28.200 "trsvcid": "0" 00:16:28.200 } 00:16:28.200 ], 00:16:28.200 "allow_any_host": true, 00:16:28.200 "hosts": [], 00:16:28.200 "serial_number": "SPDK1", 00:16:28.200 "model_number": "SPDK bdev Controller", 00:16:28.200 "max_namespaces": 32, 00:16:28.200 "min_cntlid": 1, 00:16:28.200 "max_cntlid": 65519, 00:16:28.200 "namespaces": [ 00:16:28.200 { 00:16:28.200 "nsid": 1, 00:16:28.200 "bdev_name": "Malloc1", 00:16:28.200 "name": "Malloc1", 00:16:28.200 "nguid": "819E3D8A57AA4631830839E4ECBB19AD", 00:16:28.200 "uuid": "819e3d8a-57aa-4631-8308-39e4ecbb19ad" 00:16:28.200 }, 00:16:28.200 { 00:16:28.200 "nsid": 2, 00:16:28.200 "bdev_name": "Malloc3", 00:16:28.200 "name": "Malloc3", 00:16:28.200 "nguid": "616ACC40F1D64FC7AEAF83EBAA0A1541", 00:16:28.200 "uuid": "616acc40-f1d6-4fc7-aeaf-83ebaa0a1541" 00:16:28.200 } 00:16:28.200 ] 00:16:28.200 }, 00:16:28.200 { 00:16:28.200 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:28.200 "subtype": "NVMe", 00:16:28.200 "listen_addresses": [ 00:16:28.200 { 00:16:28.200 "trtype": "VFIOUSER", 00:16:28.200 "adrfam": "IPv4", 00:16:28.200 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:28.200 "trsvcid": "0" 00:16:28.200 } 00:16:28.200 ], 00:16:28.200 "allow_any_host": true, 00:16:28.200 "hosts": [], 00:16:28.200 "serial_number": "SPDK2", 00:16:28.200 "model_number": "SPDK bdev Controller", 00:16:28.200 "max_namespaces": 32, 00:16:28.200 "min_cntlid": 1, 00:16:28.200 "max_cntlid": 65519, 00:16:28.200 "namespaces": [ 00:16:28.200 { 00:16:28.200 "nsid": 1, 00:16:28.200 "bdev_name": "Malloc2", 00:16:28.200 "name": "Malloc2", 00:16:28.200 "nguid": "0ABB982B12A042EBAC467A1EC21E3119", 00:16:28.200 "uuid": "0abb982b-12a0-42eb-ac46-7a1ec21e3119" 00:16:28.200 } 00:16:28.200 ] 00:16:28.200 } 00:16:28.200 ] 00:16:28.200 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:28.200 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=561907 00:16:28.200 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:28.200 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:28.200 15:26:17 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:16:28.200 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:28.200 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:28.200 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:28.200 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:28.200 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:28.461 [2024-11-20 15:26:17.193513] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:28.461 Malloc4 00:16:28.461 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:28.461 [2024-11-20 15:26:17.371690] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:28.461 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:28.461 Asynchronous Event Request test 00:16:28.461 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:28.461 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:28.461 Registering asynchronous event callbacks... 00:16:28.461 Starting namespace attribute notice tests for all controllers... 00:16:28.461 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:28.461 aer_cb - Changed Namespace 00:16:28.461 Cleaning up... 
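
The asynchronous-event sequence above is: the aer tool attaches and registers its callbacks, then the harness hot-adds a namespace; the controller raises a Notice event (aen_event_type 0x02) pointing at log page 4 (Changed Namespace List), which aer reports as "aer_cb - Changed Namespace". A sketch of just the trigger, reusing the two RPCs shown in the trace (written as direct rpc.py calls rather than the harness wrappers):

# Create a 64 MiB, 512 B-block malloc bdev and hot-add it to cnode2 as NSID 2.
# Adding the namespace is what fires the Changed Namespace List AEN seen above.
./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
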
00:16:28.722 [ 00:16:28.722 { 00:16:28.722 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:28.722 "subtype": "Discovery", 00:16:28.722 "listen_addresses": [], 00:16:28.722 "allow_any_host": true, 00:16:28.722 "hosts": [] 00:16:28.722 }, 00:16:28.722 { 00:16:28.722 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:28.722 "subtype": "NVMe", 00:16:28.722 "listen_addresses": [ 00:16:28.722 { 00:16:28.722 "trtype": "VFIOUSER", 00:16:28.722 "adrfam": "IPv4", 00:16:28.722 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:28.722 "trsvcid": "0" 00:16:28.722 } 00:16:28.722 ], 00:16:28.722 "allow_any_host": true, 00:16:28.722 "hosts": [], 00:16:28.722 "serial_number": "SPDK1", 00:16:28.722 "model_number": "SPDK bdev Controller", 00:16:28.722 "max_namespaces": 32, 00:16:28.722 "min_cntlid": 1, 00:16:28.722 "max_cntlid": 65519, 00:16:28.722 "namespaces": [ 00:16:28.722 { 00:16:28.722 "nsid": 1, 00:16:28.722 "bdev_name": "Malloc1", 00:16:28.722 "name": "Malloc1", 00:16:28.722 "nguid": "819E3D8A57AA4631830839E4ECBB19AD", 00:16:28.722 "uuid": "819e3d8a-57aa-4631-8308-39e4ecbb19ad" 00:16:28.722 }, 00:16:28.722 { 00:16:28.722 "nsid": 2, 00:16:28.722 "bdev_name": "Malloc3", 00:16:28.722 "name": "Malloc3", 00:16:28.722 "nguid": "616ACC40F1D64FC7AEAF83EBAA0A1541", 00:16:28.722 "uuid": "616acc40-f1d6-4fc7-aeaf-83ebaa0a1541" 00:16:28.722 } 00:16:28.722 ] 00:16:28.722 }, 00:16:28.722 { 00:16:28.722 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:28.722 "subtype": "NVMe", 00:16:28.722 "listen_addresses": [ 00:16:28.722 { 00:16:28.722 "trtype": "VFIOUSER", 00:16:28.722 "adrfam": "IPv4", 00:16:28.722 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:28.722 "trsvcid": "0" 00:16:28.722 } 00:16:28.722 ], 00:16:28.722 "allow_any_host": true, 00:16:28.722 "hosts": [], 00:16:28.722 "serial_number": "SPDK2", 00:16:28.722 "model_number": "SPDK bdev Controller", 00:16:28.722 "max_namespaces": 32, 00:16:28.722 "min_cntlid": 1, 00:16:28.722 "max_cntlid": 65519, 00:16:28.722 "namespaces": [ 00:16:28.722 { 00:16:28.722 "nsid": 1, 00:16:28.723 "bdev_name": "Malloc2", 00:16:28.723 "name": "Malloc2", 00:16:28.723 "nguid": "0ABB982B12A042EBAC467A1EC21E3119", 00:16:28.723 "uuid": "0abb982b-12a0-42eb-ac46-7a1ec21e3119" 00:16:28.723 }, 00:16:28.723 { 00:16:28.723 "nsid": 2, 00:16:28.723 "bdev_name": "Malloc4", 00:16:28.723 "name": "Malloc4", 00:16:28.723 "nguid": "C80EB65EC2FF46FA9E670ACFCF4BEA76", 00:16:28.723 "uuid": "c80eb65e-c2ff-46fa-9e67-0acfcf4bea76" 00:16:28.723 } 00:16:28.723 ] 00:16:28.723 } 00:16:28.723 ] 00:16:28.723 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 561907 00:16:28.723 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:28.723 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 552822 00:16:28.723 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 552822 ']' 00:16:28.723 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 552822 00:16:28.723 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:28.723 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:28.723 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 552822 00:16:28.723 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:28.723 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:28.723 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 552822' 00:16:28.723 killing process with pid 552822 00:16:28.723 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 552822 00:16:28.723 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 552822 00:16:28.983 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:28.983 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:28.983 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:28.983 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:28.983 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:28.983 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=561921 00:16:28.983 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 561921' 00:16:28.983 Process pid: 561921 00:16:28.983 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:28.983 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:28.983 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 561921 00:16:28.983 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 561921 ']' 00:16:28.983 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.983 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:28.983 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.984 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:28.984 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:28.984 [2024-11-20 15:26:17.851097] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:28.984 [2024-11-20 15:26:17.851803] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
00:16:28.984 [2024-11-20 15:26:17.851840] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:28.984 [2024-11-20 15:26:17.929584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:29.244 [2024-11-20 15:26:17.958823] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:29.244 [2024-11-20 15:26:17.958848] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:29.244 [2024-11-20 15:26:17.958854] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:29.244 [2024-11-20 15:26:17.958859] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:29.244 [2024-11-20 15:26:17.958863] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:29.244 [2024-11-20 15:26:17.960048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:29.244 [2024-11-20 15:26:17.960192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:29.244 [2024-11-20 15:26:17.960254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.244 [2024-11-20 15:26:17.960257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:29.244 [2024-11-20 15:26:18.010773] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:29.244 [2024-11-20 15:26:18.011601] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:16:29.244 [2024-11-20 15:26:18.012554] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:29.244 [2024-11-20 15:26:18.013049] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:29.244 [2024-11-20 15:26:18.013068] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
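
This second bring-up repeats the vfio-user setup with one difference: the target runs in interrupt mode, so the app thread and the nvmf poll-group threads are switched to intr mode (the thread.c notices above), and the transport is created with the extra '-M -I' arguments a few lines below. Condensed into a sketch; backgrounding the target with '&' is an assumption of the sketch, not how the harness launches it:

# Launch the target on cores 0-3 in interrupt mode, then create the VFIOUSER
# transport with the same extra options the trace passes through ('-M -I', verbatim).
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
./scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
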
00:16:29.817 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:29.817 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:16:29.817 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:30.773 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:31.034 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:31.034 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:31.034 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:31.034 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:31.034 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:31.296 Malloc1 00:16:31.296 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:31.558 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:31.558 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:31.819 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:31.819 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:31.819 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:32.082 Malloc2 00:16:32.082 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:32.343 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:32.343 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:32.604 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:32.604 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 561921 00:16:32.604 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 561921 ']' 00:16:32.604 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 561921 00:16:32.604 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:32.604 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:32.604 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 561921 00:16:32.604 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:32.604 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:32.604 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 561921' 00:16:32.604 killing process with pid 561921 00:16:32.604 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 561921 00:16:32.604 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 561921 00:16:32.865 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:32.865 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:32.865 00:16:32.865 real 0m51.028s 00:16:32.865 user 3m15.442s 00:16:32.865 sys 0m2.714s 00:16:32.865 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:32.865 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:32.865 ************************************ 00:16:32.865 END TEST nvmf_vfio_user 00:16:32.865 ************************************ 00:16:32.866 15:26:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:32.866 15:26:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:32.866 15:26:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:32.866 15:26:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:32.866 ************************************ 00:16:32.866 START TEST nvmf_vfio_user_nvme_compliance 00:16:32.866 ************************************ 00:16:32.866 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:32.866 * Looking for test storage... 
00:16:32.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:32.866 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:32.866 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:16:32.866 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:33.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.128 --rc genhtml_branch_coverage=1 00:16:33.128 --rc genhtml_function_coverage=1 00:16:33.128 --rc genhtml_legend=1 00:16:33.128 --rc geninfo_all_blocks=1 00:16:33.128 --rc geninfo_unexecuted_blocks=1 00:16:33.128 00:16:33.128 ' 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:33.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.128 --rc genhtml_branch_coverage=1 00:16:33.128 --rc genhtml_function_coverage=1 00:16:33.128 --rc genhtml_legend=1 00:16:33.128 --rc geninfo_all_blocks=1 00:16:33.128 --rc geninfo_unexecuted_blocks=1 00:16:33.128 00:16:33.128 ' 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:33.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.128 --rc genhtml_branch_coverage=1 00:16:33.128 --rc genhtml_function_coverage=1 00:16:33.128 --rc genhtml_legend=1 00:16:33.128 --rc geninfo_all_blocks=1 00:16:33.128 --rc geninfo_unexecuted_blocks=1 00:16:33.128 00:16:33.128 ' 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:33.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.128 --rc genhtml_branch_coverage=1 00:16:33.128 --rc genhtml_function_coverage=1 00:16:33.128 --rc genhtml_legend=1 00:16:33.128 --rc geninfo_all_blocks=1 00:16:33.128 --rc 
geninfo_unexecuted_blocks=1 00:16:33.128 00:16:33.128 ' 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.128 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:33.129 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.129 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:16:33.129 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:33.129 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:33.129 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:33.129 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:33.129 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:16:33.129 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:33.129 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:33.129 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:33.129 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:33.129 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:33.129 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:33.129 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:33.129 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:33.129 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:33.129 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:33.129 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=562811 00:16:33.129 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 562811' 00:16:33.129 Process pid: 562811 00:16:33.129 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:33.129 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 562811 00:16:33.129 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:33.129 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 562811 ']' 00:16:33.129 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.129 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:33.129 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.129 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:33.129 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:33.129 [2024-11-20 15:26:21.998864] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
00:16:33.129 [2024-11-20 15:26:21.998946] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.390 [2024-11-20 15:26:22.089511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:33.390 [2024-11-20 15:26:22.123705] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:33.390 [2024-11-20 15:26:22.123736] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:33.390 [2024-11-20 15:26:22.123742] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:33.390 [2024-11-20 15:26:22.123747] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:33.390 [2024-11-20 15:26:22.123751] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:33.390 [2024-11-20 15:26:22.125104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.390 [2024-11-20 15:26:22.125261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:33.390 [2024-11-20 15:26:22.125422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.961 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:33.961 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:16:33.961 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:34.904 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:34.904 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:34.904 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:34.904 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.904 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:34.904 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.904 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:34.904 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:34.904 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.904 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:34.904 malloc0 00:16:34.904 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.904 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:34.904 15:26:23 
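
Before the CUnit suite runs, the harness builds a single-namespace vfio-user target for it. A condensed sketch of the bring-up performed by the rpc_cmd calls that follow, written here as direct rpc.py invocations (an assumption of this sketch; the RPC names and arguments are verbatim from the trace):

# One VFIOUSER transport, one 64 MiB malloc namespace, one open subsystem
# (-a: allow any host, -s spdk: serial number, -m 32: up to 32 namespaces).
./scripts/rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
# The compliance binary is then pointed at that endpoint:
./test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'
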
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.904 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:34.904 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.904 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:34.904 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.904 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:35.165 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.165 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:35.165 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.165 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:35.165 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.165 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:35.165 00:16:35.165 00:16:35.165 CUnit - A unit testing framework for C - Version 2.1-3 00:16:35.165 http://cunit.sourceforge.net/ 00:16:35.165 00:16:35.165 00:16:35.165 Suite: nvme_compliance 00:16:35.165 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 15:26:24.046546] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:35.165 [2024-11-20 15:26:24.047835] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:35.165 [2024-11-20 15:26:24.047846] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:35.165 [2024-11-20 15:26:24.047851] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:35.165 [2024-11-20 15:26:24.049560] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:35.165 passed 00:16:35.165 Test: admin_identify_ctrlr_verify_fused ...[2024-11-20 15:26:24.124034] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:35.425 [2024-11-20 15:26:24.127049] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:35.425 passed 00:16:35.425 Test: admin_identify_ns ...[2024-11-20 15:26:24.204508] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:35.425 [2024-11-20 15:26:24.264176] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:35.425 [2024-11-20 15:26:24.272166] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:35.425 [2024-11-20 15:26:24.293260] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:16:35.425 passed 00:16:35.425 Test: admin_get_features_mandatory_features ...[2024-11-20 15:26:24.368279] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:35.425 [2024-11-20 15:26:24.372304] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:35.687 passed 00:16:35.687 Test: admin_get_features_optional_features ...[2024-11-20 15:26:24.447750] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:35.687 [2024-11-20 15:26:24.450770] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:35.687 passed 00:16:35.687 Test: admin_set_features_number_of_queues ...[2024-11-20 15:26:24.526512] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:35.687 [2024-11-20 15:26:24.634256] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:35.947 passed 00:16:35.947 Test: admin_get_log_page_mandatory_logs ...[2024-11-20 15:26:24.707475] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:35.947 [2024-11-20 15:26:24.710488] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:35.947 passed 00:16:35.947 Test: admin_get_log_page_with_lpo ...[2024-11-20 15:26:24.784244] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:35.947 [2024-11-20 15:26:24.854171] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:35.947 [2024-11-20 15:26:24.867216] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:35.947 passed 00:16:36.208 Test: fabric_property_get ...[2024-11-20 15:26:24.940394] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:36.208 [2024-11-20 15:26:24.941597] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:36.208 [2024-11-20 15:26:24.943413] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:36.208 passed 00:16:36.208 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 15:26:25.021889] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:36.208 [2024-11-20 15:26:25.023094] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:36.208 [2024-11-20 15:26:25.024916] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:36.208 passed 00:16:36.208 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 15:26:25.100516] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:36.469 [2024-11-20 15:26:25.184165] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:36.469 [2024-11-20 15:26:25.200163] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:36.469 [2024-11-20 15:26:25.205237] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:36.469 passed 00:16:36.469 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 15:26:25.280297] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:36.469 [2024-11-20 15:26:25.281495] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:36.469 [2024-11-20 15:26:25.283310] vfio_user.c:2802:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:16:36.469 passed 00:16:36.469 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 15:26:25.361525] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:36.731 [2024-11-20 15:26:25.438168] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:36.731 [2024-11-20 15:26:25.462164] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:36.731 [2024-11-20 15:26:25.467239] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:36.731 passed 00:16:36.731 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 15:26:25.539447] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:36.731 [2024-11-20 15:26:25.540642] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:36.731 [2024-11-20 15:26:25.540663] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:36.731 [2024-11-20 15:26:25.542465] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:36.731 passed 00:16:36.731 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 15:26:25.618151] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:36.992 [2024-11-20 15:26:25.712166] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:36.992 [2024-11-20 15:26:25.720166] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:36.992 [2024-11-20 15:26:25.728166] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:36.992 [2024-11-20 15:26:25.736167] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:36.992 [2024-11-20 15:26:25.765236] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:36.992 passed 00:16:36.992 Test: admin_create_io_sq_verify_pc ...[2024-11-20 15:26:25.838435] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:36.992 [2024-11-20 15:26:25.855172] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:36.992 [2024-11-20 15:26:25.872599] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:36.992 passed 00:16:36.992 Test: admin_create_io_qp_max_qps ...[2024-11-20 15:26:25.949054] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:38.378 [2024-11-20 15:26:27.073169] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:16:38.638 [2024-11-20 15:26:27.463928] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:38.638 passed 00:16:38.638 Test: admin_create_io_sq_shared_cq ...[2024-11-20 15:26:27.539707] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:38.899 [2024-11-20 15:26:27.671164] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:38.899 [2024-11-20 15:26:27.708220] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:38.899 passed 00:16:38.899 00:16:38.899 Run Summary: Type Total Ran Passed Failed Inactive 00:16:38.899 suites 1 1 n/a 0 0 00:16:38.899 tests 18 18 18 0 0 00:16:38.899 asserts 
360 360 360 0 n/a 00:16:38.899 00:16:38.899 Elapsed time = 1.507 seconds 00:16:38.899 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 562811 00:16:38.899 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 562811 ']' 00:16:38.899 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 562811 00:16:38.899 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:16:38.899 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:38.899 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 562811 00:16:38.899 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:38.899 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:38.899 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 562811' 00:16:38.899 killing process with pid 562811 00:16:38.899 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 562811 00:16:38.900 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 562811 00:16:39.161 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:39.161 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:39.161 00:16:39.161 real 0m6.227s 00:16:39.161 user 0m17.639s 00:16:39.161 sys 0m0.529s 00:16:39.161 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:39.161 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:39.161 ************************************ 00:16:39.161 END TEST nvmf_vfio_user_nvme_compliance 00:16:39.161 ************************************ 00:16:39.161 15:26:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:39.161 15:26:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:39.161 15:26:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:39.161 15:26:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:39.161 ************************************ 00:16:39.161 START TEST nvmf_vfio_user_fuzz 00:16:39.161 ************************************ 00:16:39.161 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:39.161 * Looking for test storage... 
00:16:39.161 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:39.161 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:39.161 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:16:39.161 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:39.422 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:39.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.423 --rc genhtml_branch_coverage=1 00:16:39.423 --rc genhtml_function_coverage=1 00:16:39.423 --rc genhtml_legend=1 00:16:39.423 --rc geninfo_all_blocks=1 00:16:39.423 --rc geninfo_unexecuted_blocks=1 00:16:39.423 00:16:39.423 ' 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:39.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.423 --rc genhtml_branch_coverage=1 00:16:39.423 --rc genhtml_function_coverage=1 00:16:39.423 --rc genhtml_legend=1 00:16:39.423 --rc geninfo_all_blocks=1 00:16:39.423 --rc geninfo_unexecuted_blocks=1 00:16:39.423 00:16:39.423 ' 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:39.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.423 --rc genhtml_branch_coverage=1 00:16:39.423 --rc genhtml_function_coverage=1 00:16:39.423 --rc genhtml_legend=1 00:16:39.423 --rc geninfo_all_blocks=1 00:16:39.423 --rc geninfo_unexecuted_blocks=1 00:16:39.423 00:16:39.423 ' 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:39.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.423 --rc genhtml_branch_coverage=1 00:16:39.423 --rc genhtml_function_coverage=1 00:16:39.423 --rc genhtml_legend=1 00:16:39.423 --rc geninfo_all_blocks=1 00:16:39.423 --rc geninfo_unexecuted_blocks=1 00:16:39.423 00:16:39.423 ' 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:16:39.423 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:39.423 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:39.424 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:39.424 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:39.424 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:39.424 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:39.424 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=564082 00:16:39.424 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 564082' 00:16:39.424 Process pid: 564082 00:16:39.424 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:39.424 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:39.424 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 564082 00:16:39.424 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 564082 ']' 00:16:39.424 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.424 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:39.424 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
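Here the harness launches build/bin/nvmf_tgt (-i 0 -e 0xFFFF -m 0x1) and then blocks until the app answers on /var/tmp/spdk.sock. A minimal polling sketch of that wait, not the actual waitforlisten from common.sh, using the standard rpc_get_methods RPC as the liveness probe:

  # Poll the RPC socket until nvmf_tgt responds or the retry budget runs out.
  for ((i = 0; i < 100; i++)); do
      if scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
          break  # target is up and serving RPCs
      fi
      sleep 0.1
  done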
00:16:39.424 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:39.424 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:40.365 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:40.365 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:16:40.365 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:41.308 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:41.308 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.308 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:41.308 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.308 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:41.308 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:41.309 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.309 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:41.309 malloc0 00:16:41.309 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.309 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:41.309 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.309 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:41.309 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.309 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:41.309 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.309 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:41.309 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.309 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:41.309 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.309 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:41.309 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.309 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
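The fuzz-target setup recorded above can be reproduced by hand against a running nvmf_tgt; the commands below mirror the rpc_cmd calls in the log (a sketch, assuming the default /var/tmp/spdk.sock RPC socket):

  scripts/rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  scripts/rpc.py bdev_malloc_create 64 512 -b malloc0    # 64 MB malloc bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0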
00:16:41.309 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:13.427 Fuzzing completed. Shutting down the fuzz application 00:17:13.427 00:17:13.427 Dumping successful admin opcodes: 00:17:13.427 8, 9, 10, 24, 00:17:13.427 Dumping successful io opcodes: 00:17:13.427 0, 00:17:13.427 NS: 0x20000081ef00 I/O qp, Total commands completed: 1337406, total successful commands: 5242, random_seed: 297510336 00:17:13.427 NS: 0x20000081ef00 admin qp, Total commands completed: 313672, total successful commands: 2527, random_seed: 349080768 00:17:13.427 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:13.427 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.427 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:13.427 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.427 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 564082 00:17:13.427 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 564082 ']' 00:17:13.427 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 564082 00:17:13.427 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:17:13.427 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:13.427 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 564082 00:17:13.427 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:13.427 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:13.427 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 564082' 00:17:13.427 killing process with pid 564082 00:17:13.427 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 564082 00:17:13.427 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 564082 00:17:13.427 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:13.427 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:13.427 00:17:13.427 real 0m32.784s 00:17:13.427 user 0m35.268s 00:17:13.427 sys 0m26.653s 00:17:13.427 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:13.427 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:13.427 ************************************ 
00:17:13.427 END TEST nvmf_vfio_user_fuzz 00:17:13.427 ************************************ 00:17:13.427 15:27:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:13.427 15:27:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:13.427 15:27:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:13.427 15:27:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:13.427 ************************************ 00:17:13.427 START TEST nvmf_auth_target 00:17:13.427 ************************************ 00:17:13.427 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:13.427 * Looking for test storage... 00:17:13.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:13.427 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:13.427 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:17:13.427 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:13.427 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:13.427 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:13.427 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:13.427 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:13.427 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:17:13.427 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:17:13.427 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:17:13.427 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:17:13.427 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:17:13.427 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:17:13.427 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:17:13.427 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:13.427 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:17:13.427 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:17:13.427 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:13.427 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:13.427 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:17:13.427 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:17:13.427 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:13.427 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:17:13.427 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:17:13.427 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:17:13.427 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:17:13.427 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:13.427 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:13.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.428 --rc genhtml_branch_coverage=1 00:17:13.428 --rc genhtml_function_coverage=1 00:17:13.428 --rc genhtml_legend=1 00:17:13.428 --rc geninfo_all_blocks=1 00:17:13.428 --rc geninfo_unexecuted_blocks=1 00:17:13.428 00:17:13.428 ' 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:13.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.428 --rc genhtml_branch_coverage=1 00:17:13.428 --rc genhtml_function_coverage=1 00:17:13.428 --rc genhtml_legend=1 00:17:13.428 --rc geninfo_all_blocks=1 00:17:13.428 --rc geninfo_unexecuted_blocks=1 00:17:13.428 00:17:13.428 ' 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:13.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.428 --rc genhtml_branch_coverage=1 00:17:13.428 --rc genhtml_function_coverage=1 00:17:13.428 --rc genhtml_legend=1 00:17:13.428 --rc geninfo_all_blocks=1 00:17:13.428 --rc geninfo_unexecuted_blocks=1 00:17:13.428 00:17:13.428 ' 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:13.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.428 --rc genhtml_branch_coverage=1 00:17:13.428 --rc genhtml_function_coverage=1 00:17:13.428 --rc genhtml_legend=1 00:17:13.428 --rc geninfo_all_blocks=1 00:17:13.428 --rc geninfo_unexecuted_blocks=1 00:17:13.428 00:17:13.428 ' 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:13.428 15:27:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:13.428 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:13.428 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.429 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:13.429 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.429 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:13.429 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:13.429 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:17:13.429 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.018 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:20.018 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:17:20.018 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:20.018 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:20.018 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:20.018 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:20.018 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:20.018 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:17:20.018 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:20.018 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:17:20.018 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:17:20.018 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:17:20.018 
15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:17:20.018 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:17:20.018 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:17:20.018 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:20.018 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:20.018 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:20.018 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:20.018 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:20.018 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:20.019 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:20.019 15:27:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:20.019 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:20.019 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:20.019 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:20.019 15:27:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:20.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:20.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:17:20.019 00:17:20.019 --- 10.0.0.2 ping statistics --- 00:17:20.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.019 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:20.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:20.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:17:20.019 00:17:20.019 --- 10.0.0.1 ping statistics --- 00:17:20.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.019 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=574176 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 574176 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 574176 ']' 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:20.019 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
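The cross-namespace pings confirm the link in both directions, then nvmfappstart launches the SPDK target inside the namespace so it listens on 10.0.0.2, and waitforlisten blocks until the RPC socket answers. A rough sketch of that launch-and-wait; the command line is taken from the trace, but the polling loop is an assumption about what waitforlisten does:

  # Start nvmf_tgt in the target namespace with auth logging, wait for RPC.
  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
  nvmfpid=$!
  # waitforlisten: poll the UNIX-domain RPC socket until it accepts calls.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &> /dev/null; do
      sleep 0.5
  done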
00:17:20.020 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:20.020 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.593 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:20.593 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:20.593 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:20.593 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:20.593 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.593 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:20.593 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=574701 00:17:20.593 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:20.593 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:20.593 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:17:20.593 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:20.593 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:20.593 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:20.593 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:17:20.593 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:20.593 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:20.593 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e4383b61b53080ee2419b5482a420f3f21e43f191e64cc14 00:17:20.593 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:20.593 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.vGA 00:17:20.593 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e4383b61b53080ee2419b5482a420f3f21e43f191e64cc14 0 00:17:20.593 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e4383b61b53080ee2419b5482a420f3f21e43f191e64cc14 0 00:17:20.593 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:20.593 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:20.593 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e4383b61b53080ee2419b5482a420f3f21e43f191e64cc14 00:17:20.593 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:17:20.593 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
00:17:20.593 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.vGA 00:17:20.593 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.vGA 00:17:20.593 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.vGA 00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c03abbb80a764439483e548852df3a014e69701bac29df5387ebf204738deb12 00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.4e8 00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c03abbb80a764439483e548852df3a014e69701bac29df5387ebf204738deb12 3 00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c03abbb80a764439483e548852df3a014e69701bac29df5387ebf204738deb12 3 00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c03abbb80a764439483e548852df3a014e69701bac29df5387ebf204738deb12 00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.4e8 00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.4e8 00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.4e8 00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
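Each gen_dhchap_key call reads len/2 bytes from /dev/urandom, keeps the hex string itself as the secret, and the python one-liner wraps it into the NVMe DH-HMAC-CHAP printable form DHHC-1:<hash id>:<base64(secret + CRC32)>: before the file is locked down to mode 0600. The helper body is not shown in the log, so treat the following as a reconstruction from the DHHC-1 key format (CRC32 of the secret appended little-endian, then base64):

  # Reconstructed format_dhchap_key: hex secret + hash id -> printable key.
  format_dhchap_key() {
      python3 -c '
  import base64, sys, zlib
  key = sys.argv[1].encode()                   # the hex string is the secret
  crc = zlib.crc32(key).to_bytes(4, "little")  # spec: CRC32, little-endian
  print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
  ' "$1" "$2"
  }
  key=$(xxd -p -c0 -l 24 /dev/urandom)          # 24 bytes -> 48 hex chars
  file=$(mktemp -t spdk.key-null.XXX)
  format_dhchap_key "$key" 0 > "$file" && chmod 0600 "$file"

The hash id is 0 for a null transform and 1/2/3 for sha256/sha384/sha512, which is why the connect strings later in the log read DHHC-1:00:, :01:, :02: and :03:.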
00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a8ba73de44228312393540024f21c682 00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.D4f 00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a8ba73de44228312393540024f21c682 1 00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a8ba73de44228312393540024f21c682 1 00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a8ba73de44228312393540024f21c682 00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.D4f 00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.D4f 00:17:20.854 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.D4f 00:17:20.855 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:17:20.855 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:20.855 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:20.855 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:20.855 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:20.855 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:20.855 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:20.855 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cf11712a57978989a0d4d5172055b3d530c63f6f9e547cac 00:17:20.855 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:20.855 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.YLS 00:17:20.855 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cf11712a57978989a0d4d5172055b3d530c63f6f9e547cac 2 00:17:20.855 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cf11712a57978989a0d4d5172055b3d530c63f6f9e547cac 2 00:17:20.855 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:20.855 15:27:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:20.855 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cf11712a57978989a0d4d5172055b3d530c63f6f9e547cac 00:17:20.855 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:20.855 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:20.855 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.YLS 00:17:20.855 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.YLS 00:17:20.855 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.YLS 00:17:20.855 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:17:20.855 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:20.855 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:20.855 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:20.855 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:20.855 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:20.855 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:20.855 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=14f982b6aa774758e76e68cf2be9a85ec73338aa7149dd6a 00:17:20.855 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:20.855 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.j9m 00:17:20.855 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 14f982b6aa774758e76e68cf2be9a85ec73338aa7149dd6a 2 00:17:20.855 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 14f982b6aa774758e76e68cf2be9a85ec73338aa7149dd6a 2 00:17:20.855 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:20.855 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:20.855 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=14f982b6aa774758e76e68cf2be9a85ec73338aa7149dd6a 00:17:20.855 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:20.855 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.j9m 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.j9m 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.j9m 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c932a6845c1cc959ef0575203c2ca24d 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.kSE 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c932a6845c1cc959ef0575203c2ca24d 1 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c932a6845c1cc959ef0575203c2ca24d 1 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c932a6845c1cc959ef0575203c2ca24d 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.kSE 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.kSE 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.kSE 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a43c69de8d2e0f28c440331716c360d03abad10013c86afc3c3cd8f79e7f9f6a 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.iie 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key a43c69de8d2e0f28c440331716c360d03abad10013c86afc3c3cd8f79e7f9f6a 3 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a43c69de8d2e0f28c440331716c360d03abad10013c86afc3c3cd8f79e7f9f6a 3 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a43c69de8d2e0f28c440331716c360d03abad10013c86afc3c3cd8f79e7f9f6a 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.iie 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.iie 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.iie 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 574176 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 574176 ']' 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:21.115 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.116 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:21.116 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.377 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:21.377 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:21.377 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 574701 /var/tmp/host.sock 00:17:21.377 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 574701 ']' 00:17:21.377 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:21.377 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:21.377 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:21.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:17:21.377 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:21.377 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.639 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:21.639 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:21.639 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:17:21.639 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.639 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.639 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.639 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:21.640 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.vGA 00:17:21.640 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.640 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.640 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.640 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.vGA 00:17:21.640 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.vGA 00:17:21.640 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.4e8 ]] 00:17:21.640 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4e8 00:17:21.640 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.640 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.640 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.640 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4e8 00:17:21.640 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4e8 00:17:21.901 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:21.901 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.D4f 00:17:21.901 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.901 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.901 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.901 15:27:10 
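Once both daemons are up (nvmf_tgt as the target, spdk_tgt as the host-side initiator stack), every key file is registered twice under the same name: rpc_cmd talks to the target over /var/tmp/spdk.sock, hostrpc to the host process over /var/tmp/host.sock, so key0/ckey0 resolve on either end of the handshake. Stripped of the wrappers, the four calls for index 0 are:

  # Same key file, two keyrings: target side first, then host side.
  ./scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key0  /tmp/spdk.key-null.vGA
  ./scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.vGA
  ./scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4e8
  ./scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4e8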
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.D4f 00:17:21.901 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.D4f 00:17:22.162 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.YLS ]] 00:17:22.162 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.YLS 00:17:22.162 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.162 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.162 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.162 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.YLS 00:17:22.163 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.YLS 00:17:22.423 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:22.424 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.j9m 00:17:22.424 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.424 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.424 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.424 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.j9m 00:17:22.424 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.j9m 00:17:22.424 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.kSE ]] 00:17:22.424 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kSE 00:17:22.424 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.424 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.424 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.424 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kSE 00:17:22.424 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kSE 00:17:22.685 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:22.685 15:27:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.iie 00:17:22.685 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.685 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.685 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.685 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.iie 00:17:22.685 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.iie 00:17:22.974 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:17:22.974 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:22.974 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:22.974 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.974 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:22.974 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:22.974 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:17:22.974 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.974 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:22.974 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:22.974 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:22.974 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.974 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.974 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.974 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.974 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.974 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.974 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.974 
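The RPCs around this point are one iteration of auth.sh's sweep: for each digest, each DH group, and each key index, the host's allowed DH-HMAC-CHAP parameters are pinned via bdev_nvme_set_options, then connect_authenticate authorizes the host NQN on the subsystem with that key pair and attaches a controller. The loop shape implied by the @118-@123 line numbers in the trace (only sha256 and the null group are visible in this stretch):

  # The sweep driving this part of the log.
  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              hostrpc bdev_nvme_set_options \
                  --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done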
15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.255 00:17:23.255 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.255 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.255 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.535 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.535 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.535 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.535 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.535 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.535 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.535 { 00:17:23.535 "cntlid": 1, 00:17:23.535 "qid": 0, 00:17:23.535 "state": "enabled", 00:17:23.535 "thread": "nvmf_tgt_poll_group_000", 00:17:23.535 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:23.535 "listen_address": { 00:17:23.535 "trtype": "TCP", 00:17:23.535 "adrfam": "IPv4", 00:17:23.535 "traddr": "10.0.0.2", 00:17:23.535 "trsvcid": "4420" 00:17:23.535 }, 00:17:23.535 "peer_address": { 00:17:23.535 "trtype": "TCP", 00:17:23.535 "adrfam": "IPv4", 00:17:23.535 "traddr": "10.0.0.1", 00:17:23.535 "trsvcid": "58890" 00:17:23.535 }, 00:17:23.535 "auth": { 00:17:23.535 "state": "completed", 00:17:23.535 "digest": "sha256", 00:17:23.535 "dhgroup": "null" 00:17:23.535 } 00:17:23.535 } 00:17:23.535 ]' 00:17:23.535 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.535 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:23.535 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.535 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:23.535 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.535 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.535 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.535 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.795 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZTQzODNiNjFiNTMwODBlZTI0MTliNTQ4MmE0MjBmM2YyMWU0M2YxOTFlNjRjYzE0ytg3PA==: --dhchap-ctrl-secret DHHC-1:03:YzAzYWJiYjgwYTc2NDQzOTQ4M2U1NDg4NTJkZjNhMDE0ZTY5NzAxYmFjMjlkZjUzODdlYmYyMDQ3MzhkZWIxMpUjPms=: 00:17:23.795 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTQzODNiNjFiNTMwODBlZTI0MTliNTQ4MmE0MjBmM2YyMWU0M2YxOTFlNjRjYzE0ytg3PA==: --dhchap-ctrl-secret DHHC-1:03:YzAzYWJiYjgwYTc2NDQzOTQ4M2U1NDg4NTJkZjNhMDE0ZTY5NzAxYmFjMjlkZjUzODdlYmYyMDQ3MzhkZWIxMpUjPms=: 00:17:24.736 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.737 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:24.737 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.737 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.737 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.737 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.737 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:24.737 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:24.737 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:17:24.737 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.737 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:24.737 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:24.737 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:24.737 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.737 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.737 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.737 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.737 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.737 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.737 15:27:13 
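Each pass is then double-checked from the kernel initiator: nvme-cli connects to the same subsystem passing the printable DHHC-1 strings directly (those base64 blobs decode back to the hex secrets generated above), and disconnects before the host is de-authorized and the next key index starts. The essential command, with the secrets truncated here; the full values appear in the trace:

  # Kernel-side check of the same key pair, passed as printable secrets.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
      -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --dhchap-secret 'DHHC-1:00:ZTQz...PA==:' \
      --dhchap-ctrl-secret 'DHHC-1:03:YzAz...Pms=:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0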
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.737 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.997 00:17:24.997 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.997 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.997 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.258 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.258 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.258 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.258 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.258 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.258 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.258 { 00:17:25.258 "cntlid": 3, 00:17:25.258 "qid": 0, 00:17:25.258 "state": "enabled", 00:17:25.258 "thread": "nvmf_tgt_poll_group_000", 00:17:25.258 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:25.258 "listen_address": { 00:17:25.258 "trtype": "TCP", 00:17:25.258 "adrfam": "IPv4", 00:17:25.258 "traddr": "10.0.0.2", 00:17:25.258 "trsvcid": "4420" 00:17:25.258 }, 00:17:25.258 "peer_address": { 00:17:25.258 "trtype": "TCP", 00:17:25.258 "adrfam": "IPv4", 00:17:25.258 "traddr": "10.0.0.1", 00:17:25.258 "trsvcid": "58926" 00:17:25.258 }, 00:17:25.258 "auth": { 00:17:25.258 "state": "completed", 00:17:25.258 "digest": "sha256", 00:17:25.258 "dhgroup": "null" 00:17:25.258 } 00:17:25.258 } 00:17:25.258 ]' 00:17:25.258 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.258 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:25.258 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.258 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:25.258 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.258 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.258 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.258 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.519 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: --dhchap-ctrl-secret DHHC-1:02:Y2YxMTcxMmE1Nzk3ODk4OWEwZDRkNTE3MjA1NWIzZDUzMGM2M2Y2ZjllNTQ3Y2Fj2NQJnQ==: 00:17:25.519 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: --dhchap-ctrl-secret DHHC-1:02:Y2YxMTcxMmE1Nzk3ODk4OWEwZDRkNTE3MjA1NWIzZDUzMGM2M2Y2ZjllNTQ3Y2Fj2NQJnQ==: 00:17:26.088 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.088 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:26.088 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.088 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.088 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.088 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.088 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:26.088 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:26.348 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:17:26.348 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.348 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:26.348 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:26.348 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:26.348 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.348 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.348 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.348 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.348 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.348 15:27:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.348 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.348 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.609 00:17:26.609 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.609 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.609 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.869 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.869 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.869 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.869 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.869 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.869 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.869 { 00:17:26.869 "cntlid": 5, 00:17:26.869 "qid": 0, 00:17:26.869 "state": "enabled", 00:17:26.869 "thread": "nvmf_tgt_poll_group_000", 00:17:26.869 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:26.869 "listen_address": { 00:17:26.869 "trtype": "TCP", 00:17:26.869 "adrfam": "IPv4", 00:17:26.869 "traddr": "10.0.0.2", 00:17:26.869 "trsvcid": "4420" 00:17:26.869 }, 00:17:26.869 "peer_address": { 00:17:26.869 "trtype": "TCP", 00:17:26.869 "adrfam": "IPv4", 00:17:26.869 "traddr": "10.0.0.1", 00:17:26.869 "trsvcid": "45046" 00:17:26.869 }, 00:17:26.869 "auth": { 00:17:26.869 "state": "completed", 00:17:26.869 "digest": "sha256", 00:17:26.869 "dhgroup": "null" 00:17:26.869 } 00:17:26.869 } 00:17:26.869 ]' 00:17:26.869 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.869 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:26.869 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.869 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:26.869 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.869 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.869 15:27:15 
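The qpair dumps like the one above are the actual assertion: connect_authenticate reads the pair back from the target and requires auth.state to be "completed" with exactly the digest and DH group it just negotiated. Condensed, with rpc_cmd standing for the target-side wrapper seen in the trace:

  # Assert the negotiated auth parameters on the live qpair (target side).
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]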
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.869 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.129 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: --dhchap-ctrl-secret DHHC-1:01:YzkzMmE2ODQ1YzFjYzk1OWVmMDU3NTIwM2MyY2EyNGT38soj: 00:17:27.129 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: --dhchap-ctrl-secret DHHC-1:01:YzkzMmE2ODQ1YzFjYzk1OWVmMDU3NTIwM2MyY2EyNGT38soj: 00:17:27.699 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.699 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:27.699 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.699 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.699 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.699 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.699 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:27.699 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:27.960 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:17:27.960 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.960 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:27.960 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:27.960 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:27.960 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.960 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:27.960 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.960 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:27.960 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.960 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:27.960 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:27.960 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:28.222 00:17:28.222 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.222 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.222 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.482 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.482 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.482 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.482 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.482 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.482 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.482 { 00:17:28.482 "cntlid": 7, 00:17:28.482 "qid": 0, 00:17:28.482 "state": "enabled", 00:17:28.482 "thread": "nvmf_tgt_poll_group_000", 00:17:28.482 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:28.482 "listen_address": { 00:17:28.482 "trtype": "TCP", 00:17:28.482 "adrfam": "IPv4", 00:17:28.482 "traddr": "10.0.0.2", 00:17:28.482 "trsvcid": "4420" 00:17:28.482 }, 00:17:28.482 "peer_address": { 00:17:28.482 "trtype": "TCP", 00:17:28.482 "adrfam": "IPv4", 00:17:28.482 "traddr": "10.0.0.1", 00:17:28.482 "trsvcid": "45090" 00:17:28.482 }, 00:17:28.482 "auth": { 00:17:28.482 "state": "completed", 00:17:28.482 "digest": "sha256", 00:17:28.482 "dhgroup": "null" 00:17:28.482 } 00:17:28.482 } 00:17:28.482 ]' 00:17:28.482 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.482 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:28.482 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.482 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:28.482 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.482 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.482 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.482 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.743 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=: 00:17:28.743 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=: 00:17:29.314 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.314 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:29.314 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.314 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.314 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.314 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:29.314 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.314 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:29.314 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:29.574 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:17:29.574 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.574 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:29.574 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:29.574 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:29.574 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.574 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.574 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.574 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.574 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.574 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.574 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.574 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.835 00:17:29.835 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.835 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.835 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.095 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.095 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.095 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.095 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.095 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.095 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.095 { 00:17:30.095 "cntlid": 9, 00:17:30.095 "qid": 0, 00:17:30.095 "state": "enabled", 00:17:30.095 "thread": "nvmf_tgt_poll_group_000", 00:17:30.096 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:30.096 "listen_address": { 00:17:30.096 "trtype": "TCP", 00:17:30.096 "adrfam": "IPv4", 00:17:30.096 "traddr": "10.0.0.2", 00:17:30.096 "trsvcid": "4420" 00:17:30.096 }, 00:17:30.096 "peer_address": { 00:17:30.096 "trtype": "TCP", 00:17:30.096 "adrfam": "IPv4", 00:17:30.096 "traddr": "10.0.0.1", 00:17:30.096 "trsvcid": "45122" 00:17:30.096 }, 00:17:30.096 "auth": { 00:17:30.096 "state": "completed", 00:17:30.096 "digest": "sha256", 00:17:30.096 "dhgroup": "ffdhe2048" 00:17:30.096 } 00:17:30.096 } 00:17:30.096 ]' 00:17:30.096 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.096 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:30.096 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.096 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:17:30.096 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.096 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.096 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.096 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.356 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQzODNiNjFiNTMwODBlZTI0MTliNTQ4MmE0MjBmM2YyMWU0M2YxOTFlNjRjYzE0ytg3PA==: --dhchap-ctrl-secret DHHC-1:03:YzAzYWJiYjgwYTc2NDQzOTQ4M2U1NDg4NTJkZjNhMDE0ZTY5NzAxYmFjMjlkZjUzODdlYmYyMDQ3MzhkZWIxMpUjPms=: 00:17:30.356 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTQzODNiNjFiNTMwODBlZTI0MTliNTQ4MmE0MjBmM2YyMWU0M2YxOTFlNjRjYzE0ytg3PA==: --dhchap-ctrl-secret DHHC-1:03:YzAzYWJiYjgwYTc2NDQzOTQ4M2U1NDg4NTJkZjNhMDE0ZTY5NzAxYmFjMjlkZjUzODdlYmYyMDQ3MzhkZWIxMpUjPms=: 00:17:30.929 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.929 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:30.929 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.929 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.929 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.929 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.929 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:30.929 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:31.191 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:17:31.191 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.191 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:31.191 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:31.191 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:31.191 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.191 15:27:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.191 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.191 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.191 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.191 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.191 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.191 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.452 00:17:31.452 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.452 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.452 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.713 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.713 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.713 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.713 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.713 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.713 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.713 { 00:17:31.713 "cntlid": 11, 00:17:31.713 "qid": 0, 00:17:31.713 "state": "enabled", 00:17:31.713 "thread": "nvmf_tgt_poll_group_000", 00:17:31.713 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:31.713 "listen_address": { 00:17:31.713 "trtype": "TCP", 00:17:31.713 "adrfam": "IPv4", 00:17:31.713 "traddr": "10.0.0.2", 00:17:31.713 "trsvcid": "4420" 00:17:31.713 }, 00:17:31.713 "peer_address": { 00:17:31.713 "trtype": "TCP", 00:17:31.713 "adrfam": "IPv4", 00:17:31.713 "traddr": "10.0.0.1", 00:17:31.713 "trsvcid": "45150" 00:17:31.713 }, 00:17:31.713 "auth": { 00:17:31.713 "state": "completed", 00:17:31.713 "digest": "sha256", 00:17:31.713 "dhgroup": "ffdhe2048" 00:17:31.713 } 00:17:31.713 } 00:17:31.713 ]' 00:17:31.713 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.713 15:27:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:31.713 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.713 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:31.713 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.713 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.713 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.713 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.974 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: --dhchap-ctrl-secret DHHC-1:02:Y2YxMTcxMmE1Nzk3ODk4OWEwZDRkNTE3MjA1NWIzZDUzMGM2M2Y2ZjllNTQ3Y2Fj2NQJnQ==: 00:17:31.974 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: --dhchap-ctrl-secret DHHC-1:02:Y2YxMTcxMmE1Nzk3ODk4OWEwZDRkNTE3MjA1NWIzZDUzMGM2M2Y2ZjllNTQ3Y2Fj2NQJnQ==: 00:17:32.545 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.545 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:32.545 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.545 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.545 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.545 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.545 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:32.545 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:32.806 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:17:32.806 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.806 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:32.806 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:32.806 15:27:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:32.806 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.806 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.806 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.806 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.806 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.806 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.806 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.806 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.067 00:17:33.067 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.067 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.067 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.328 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.328 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.328 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.328 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.328 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.328 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.328 { 00:17:33.328 "cntlid": 13, 00:17:33.328 "qid": 0, 00:17:33.328 "state": "enabled", 00:17:33.328 "thread": "nvmf_tgt_poll_group_000", 00:17:33.328 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:33.328 "listen_address": { 00:17:33.328 "trtype": "TCP", 00:17:33.328 "adrfam": "IPv4", 00:17:33.328 "traddr": "10.0.0.2", 00:17:33.328 "trsvcid": "4420" 00:17:33.328 }, 00:17:33.328 "peer_address": { 00:17:33.328 "trtype": "TCP", 00:17:33.328 "adrfam": "IPv4", 00:17:33.328 "traddr": "10.0.0.1", 00:17:33.328 "trsvcid": "45176" 00:17:33.328 }, 00:17:33.328 "auth": { 00:17:33.328 "state": "completed", 00:17:33.328 "digest": 
"sha256", 00:17:33.328 "dhgroup": "ffdhe2048" 00:17:33.328 } 00:17:33.328 } 00:17:33.328 ]' 00:17:33.328 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.328 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:33.328 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.328 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:33.328 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.328 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.328 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.328 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.588 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: --dhchap-ctrl-secret DHHC-1:01:YzkzMmE2ODQ1YzFjYzk1OWVmMDU3NTIwM2MyY2EyNGT38soj: 00:17:33.588 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: --dhchap-ctrl-secret DHHC-1:01:YzkzMmE2ODQ1YzFjYzk1OWVmMDU3NTIwM2MyY2EyNGT38soj: 00:17:34.159 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.159 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:34.159 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.159 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.159 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.159 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.159 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:34.159 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:34.420 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:17:34.420 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.420 15:27:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:34.420 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:34.420 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:34.420 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.420 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:34.420 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.420 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.420 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.420 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:34.420 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:34.420 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:34.680 00:17:34.680 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.680 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.680 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.941 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.941 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.941 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.941 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.941 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.941 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.941 { 00:17:34.941 "cntlid": 15, 00:17:34.941 "qid": 0, 00:17:34.941 "state": "enabled", 00:17:34.941 "thread": "nvmf_tgt_poll_group_000", 00:17:34.941 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:34.941 "listen_address": { 00:17:34.941 "trtype": "TCP", 00:17:34.941 "adrfam": "IPv4", 00:17:34.941 "traddr": "10.0.0.2", 00:17:34.941 "trsvcid": "4420" 00:17:34.941 }, 00:17:34.941 "peer_address": { 00:17:34.941 "trtype": "TCP", 00:17:34.941 "adrfam": "IPv4", 00:17:34.941 "traddr": "10.0.0.1", 00:17:34.941 
"trsvcid": "45200" 00:17:34.941 }, 00:17:34.941 "auth": { 00:17:34.941 "state": "completed", 00:17:34.941 "digest": "sha256", 00:17:34.941 "dhgroup": "ffdhe2048" 00:17:34.941 } 00:17:34.941 } 00:17:34.941 ]' 00:17:34.941 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.941 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:34.941 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.941 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:34.941 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.941 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.941 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.941 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.201 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=: 00:17:35.201 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=: 00:17:35.773 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.773 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:35.773 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.773 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.773 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.773 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:35.773 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.773 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:35.773 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:36.034 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:17:36.034 15:27:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.034 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:36.034 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:36.034 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:36.034 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.034 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.034 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.034 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.034 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.034 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.034 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.034 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.294 00:17:36.294 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.294 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.295 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.555 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.555 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.555 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.555 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.555 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.555 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.555 { 00:17:36.555 "cntlid": 17, 00:17:36.555 "qid": 0, 00:17:36.555 "state": "enabled", 00:17:36.555 "thread": "nvmf_tgt_poll_group_000", 00:17:36.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:36.555 "listen_address": { 00:17:36.555 "trtype": "TCP", 00:17:36.555 "adrfam": "IPv4", 
00:17:36.555 "traddr": "10.0.0.2", 00:17:36.555 "trsvcid": "4420" 00:17:36.555 }, 00:17:36.555 "peer_address": { 00:17:36.555 "trtype": "TCP", 00:17:36.555 "adrfam": "IPv4", 00:17:36.555 "traddr": "10.0.0.1", 00:17:36.555 "trsvcid": "37036" 00:17:36.555 }, 00:17:36.555 "auth": { 00:17:36.555 "state": "completed", 00:17:36.555 "digest": "sha256", 00:17:36.555 "dhgroup": "ffdhe3072" 00:17:36.555 } 00:17:36.555 } 00:17:36.555 ]' 00:17:36.555 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.555 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:36.555 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.555 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:36.555 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.555 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.555 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.555 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.815 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQzODNiNjFiNTMwODBlZTI0MTliNTQ4MmE0MjBmM2YyMWU0M2YxOTFlNjRjYzE0ytg3PA==: --dhchap-ctrl-secret DHHC-1:03:YzAzYWJiYjgwYTc2NDQzOTQ4M2U1NDg4NTJkZjNhMDE0ZTY5NzAxYmFjMjlkZjUzODdlYmYyMDQ3MzhkZWIxMpUjPms=: 00:17:36.815 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTQzODNiNjFiNTMwODBlZTI0MTliNTQ4MmE0MjBmM2YyMWU0M2YxOTFlNjRjYzE0ytg3PA==: --dhchap-ctrl-secret DHHC-1:03:YzAzYWJiYjgwYTc2NDQzOTQ4M2U1NDg4NTJkZjNhMDE0ZTY5NzAxYmFjMjlkZjUzODdlYmYyMDQ3MzhkZWIxMpUjPms=: 00:17:37.386 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.386 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:37.386 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.386 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.386 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.386 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.386 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:37.386 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:37.647 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:17:37.647 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.647 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:37.647 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:37.647 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:37.647 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.647 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.647 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.647 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.647 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.647 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.647 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.647 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.908 00:17:37.908 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.908 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.908 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.908 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.908 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.908 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.908 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.169 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.169 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.169 { 
00:17:38.169 "cntlid": 19, 00:17:38.169 "qid": 0, 00:17:38.169 "state": "enabled", 00:17:38.169 "thread": "nvmf_tgt_poll_group_000", 00:17:38.169 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:38.169 "listen_address": { 00:17:38.169 "trtype": "TCP", 00:17:38.169 "adrfam": "IPv4", 00:17:38.169 "traddr": "10.0.0.2", 00:17:38.169 "trsvcid": "4420" 00:17:38.169 }, 00:17:38.169 "peer_address": { 00:17:38.169 "trtype": "TCP", 00:17:38.169 "adrfam": "IPv4", 00:17:38.169 "traddr": "10.0.0.1", 00:17:38.169 "trsvcid": "37074" 00:17:38.169 }, 00:17:38.169 "auth": { 00:17:38.169 "state": "completed", 00:17:38.169 "digest": "sha256", 00:17:38.169 "dhgroup": "ffdhe3072" 00:17:38.169 } 00:17:38.169 } 00:17:38.169 ]' 00:17:38.169 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.169 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:38.169 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.169 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:38.169 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.169 15:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.169 15:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.169 15:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.429 15:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: --dhchap-ctrl-secret DHHC-1:02:Y2YxMTcxMmE1Nzk3ODk4OWEwZDRkNTE3MjA1NWIzZDUzMGM2M2Y2ZjllNTQ3Y2Fj2NQJnQ==: 00:17:38.429 15:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: --dhchap-ctrl-secret DHHC-1:02:Y2YxMTcxMmE1Nzk3ODk4OWEwZDRkNTE3MjA1NWIzZDUzMGM2M2Y2ZjllNTQ3Y2Fj2NQJnQ==: 00:17:38.999 15:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.999 15:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:38.999 15:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.999 15:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.999 15:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.999 15:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.999 15:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:38.999 15:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:39.259 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:17:39.259 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.259 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:39.259 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:39.259 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:39.259 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.259 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.259 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.259 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.259 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.259 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.259 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.259 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.520 00:17:39.520 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.520 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.520 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.520 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.520 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.520 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.520 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.780 15:27:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.780 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.780 { 00:17:39.780 "cntlid": 21, 00:17:39.780 "qid": 0, 00:17:39.780 "state": "enabled", 00:17:39.780 "thread": "nvmf_tgt_poll_group_000", 00:17:39.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:39.780 "listen_address": { 00:17:39.780 "trtype": "TCP", 00:17:39.780 "adrfam": "IPv4", 00:17:39.780 "traddr": "10.0.0.2", 00:17:39.780 "trsvcid": "4420" 00:17:39.780 }, 00:17:39.780 "peer_address": { 00:17:39.780 "trtype": "TCP", 00:17:39.780 "adrfam": "IPv4", 00:17:39.780 "traddr": "10.0.0.1", 00:17:39.780 "trsvcid": "37112" 00:17:39.780 }, 00:17:39.780 "auth": { 00:17:39.780 "state": "completed", 00:17:39.780 "digest": "sha256", 00:17:39.780 "dhgroup": "ffdhe3072" 00:17:39.780 } 00:17:39.780 } 00:17:39.780 ]' 00:17:39.780 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.780 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:39.780 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.780 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:39.780 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.780 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.780 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.780 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.041 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: --dhchap-ctrl-secret DHHC-1:01:YzkzMmE2ODQ1YzFjYzk1OWVmMDU3NTIwM2MyY2EyNGT38soj: 00:17:40.041 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: --dhchap-ctrl-secret DHHC-1:01:YzkzMmE2ODQ1YzFjYzk1OWVmMDU3NTIwM2MyY2EyNGT38soj: 00:17:40.610 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.610 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:40.610 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.610 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.610 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:40.610 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.610 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:40.610 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:40.871 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:17:40.871 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.871 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:40.871 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:40.871 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:40.871 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.871 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:40.871 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.871 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.871 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.871 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:40.871 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:40.871 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:41.131 00:17:41.131 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.131 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.131 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.391 15:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.391 15:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.391 15:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.391 15:27:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.391 15:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.391 15:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.391 { 00:17:41.391 "cntlid": 23, 00:17:41.391 "qid": 0, 00:17:41.391 "state": "enabled", 00:17:41.391 "thread": "nvmf_tgt_poll_group_000", 00:17:41.391 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:41.391 "listen_address": { 00:17:41.391 "trtype": "TCP", 00:17:41.391 "adrfam": "IPv4", 00:17:41.391 "traddr": "10.0.0.2", 00:17:41.391 "trsvcid": "4420" 00:17:41.391 }, 00:17:41.391 "peer_address": { 00:17:41.391 "trtype": "TCP", 00:17:41.391 "adrfam": "IPv4", 00:17:41.391 "traddr": "10.0.0.1", 00:17:41.391 "trsvcid": "37134" 00:17:41.391 }, 00:17:41.391 "auth": { 00:17:41.391 "state": "completed", 00:17:41.391 "digest": "sha256", 00:17:41.391 "dhgroup": "ffdhe3072" 00:17:41.391 } 00:17:41.391 } 00:17:41.391 ]' 00:17:41.391 15:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.391 15:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:41.391 15:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.391 15:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:41.391 15:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.391 15:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.391 15:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.391 15:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.652 15:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=: 00:17:41.652 15:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=: 00:17:42.223 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.223 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:42.223 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.223 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.223 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:42.223 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:42.223 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.223 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:42.223 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:42.484 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:17:42.484 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.484 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:42.484 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:42.484 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:42.484 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.484 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.484 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.484 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.484 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.484 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.484 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.484 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.745 00:17:42.745 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.745 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.745 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.006 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.006 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.006 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.006 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.006 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.006 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.006 { 00:17:43.006 "cntlid": 25, 00:17:43.006 "qid": 0, 00:17:43.006 "state": "enabled", 00:17:43.006 "thread": "nvmf_tgt_poll_group_000", 00:17:43.006 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:43.006 "listen_address": { 00:17:43.006 "trtype": "TCP", 00:17:43.006 "adrfam": "IPv4", 00:17:43.006 "traddr": "10.0.0.2", 00:17:43.006 "trsvcid": "4420" 00:17:43.006 }, 00:17:43.006 "peer_address": { 00:17:43.006 "trtype": "TCP", 00:17:43.006 "adrfam": "IPv4", 00:17:43.006 "traddr": "10.0.0.1", 00:17:43.006 "trsvcid": "37156" 00:17:43.006 }, 00:17:43.006 "auth": { 00:17:43.006 "state": "completed", 00:17:43.006 "digest": "sha256", 00:17:43.006 "dhgroup": "ffdhe4096" 00:17:43.006 } 00:17:43.006 } 00:17:43.006 ]' 00:17:43.006 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.006 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:43.006 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.006 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:43.006 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.006 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.006 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.006 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.266 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQzODNiNjFiNTMwODBlZTI0MTliNTQ4MmE0MjBmM2YyMWU0M2YxOTFlNjRjYzE0ytg3PA==: --dhchap-ctrl-secret DHHC-1:03:YzAzYWJiYjgwYTc2NDQzOTQ4M2U1NDg4NTJkZjNhMDE0ZTY5NzAxYmFjMjlkZjUzODdlYmYyMDQ3MzhkZWIxMpUjPms=: 00:17:43.267 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTQzODNiNjFiNTMwODBlZTI0MTliNTQ4MmE0MjBmM2YyMWU0M2YxOTFlNjRjYzE0ytg3PA==: --dhchap-ctrl-secret DHHC-1:03:YzAzYWJiYjgwYTc2NDQzOTQ4M2U1NDg4NTJkZjNhMDE0ZTY5NzAxYmFjMjlkZjUzODdlYmYyMDQ3MzhkZWIxMpUjPms=: 00:17:43.836 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.836 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:43.836 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.836 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.836 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.836 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.836 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:43.836 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:44.097 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:17:44.097 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.097 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:44.097 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:44.097 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:44.097 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.097 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.097 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.097 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.097 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.097 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.097 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.097 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.357 00:17:44.357 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.357 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.357 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.618 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.618 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.618 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.618 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.618 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.618 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.618 { 00:17:44.618 "cntlid": 27, 00:17:44.618 "qid": 0, 00:17:44.618 "state": "enabled", 00:17:44.618 "thread": "nvmf_tgt_poll_group_000", 00:17:44.618 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:44.618 "listen_address": { 00:17:44.618 "trtype": "TCP", 00:17:44.618 "adrfam": "IPv4", 00:17:44.618 "traddr": "10.0.0.2", 00:17:44.618 "trsvcid": "4420" 00:17:44.618 }, 00:17:44.618 "peer_address": { 00:17:44.618 "trtype": "TCP", 00:17:44.618 "adrfam": "IPv4", 00:17:44.618 "traddr": "10.0.0.1", 00:17:44.618 "trsvcid": "37180" 00:17:44.618 }, 00:17:44.618 "auth": { 00:17:44.618 "state": "completed", 00:17:44.618 "digest": "sha256", 00:17:44.618 "dhgroup": "ffdhe4096" 00:17:44.618 } 00:17:44.618 } 00:17:44.618 ]' 00:17:44.618 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.618 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:44.618 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.618 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:44.618 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.618 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.618 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.618 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.879 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: --dhchap-ctrl-secret DHHC-1:02:Y2YxMTcxMmE1Nzk3ODk4OWEwZDRkNTE3MjA1NWIzZDUzMGM2M2Y2ZjllNTQ3Y2Fj2NQJnQ==: 00:17:44.879 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: --dhchap-ctrl-secret DHHC-1:02:Y2YxMTcxMmE1Nzk3ODk4OWEwZDRkNTE3MjA1NWIzZDUzMGM2M2Y2ZjllNTQ3Y2Fj2NQJnQ==: 00:17:45.449 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:45.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.449 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:45.449 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.449 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.449 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.449 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.449 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:45.449 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:45.710 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:17:45.710 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.710 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:45.710 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:45.710 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:45.710 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.710 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.710 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.710 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.710 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.710 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.710 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.710 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.970 00:17:45.970 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
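The pair of checks that follows every attach is the core assertion of connect_authenticate: the host-side bdev_nvme_get_controllers RPC must report the freshly attached controller (nvme0), and the target-side nvmf_subsystem_get_qpairs dump must show the qpair with auth.state "completed" and the digest/dhgroup configured for this iteration. A minimal sketch of that verification in bash, assuming the same rpc.py path as in the log, the target RPC server on its default socket, the host RPC server on /var/tmp/host.sock, and the ffdhe4096 group used by the surrounding iterations:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Host side: the attach must have created controller "nvme0".
    [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # Target side: the qpair must have completed DH-HMAC-CHAP with the expected parameters.
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
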
00:17:45.970 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.970 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.231 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.231 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.231 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.231 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.231 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.231 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.231 { 00:17:46.231 "cntlid": 29, 00:17:46.231 "qid": 0, 00:17:46.231 "state": "enabled", 00:17:46.231 "thread": "nvmf_tgt_poll_group_000", 00:17:46.231 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:46.231 "listen_address": { 00:17:46.231 "trtype": "TCP", 00:17:46.231 "adrfam": "IPv4", 00:17:46.231 "traddr": "10.0.0.2", 00:17:46.231 "trsvcid": "4420" 00:17:46.231 }, 00:17:46.231 "peer_address": { 00:17:46.231 "trtype": "TCP", 00:17:46.231 "adrfam": "IPv4", 00:17:46.231 "traddr": "10.0.0.1", 00:17:46.231 "trsvcid": "47410" 00:17:46.231 }, 00:17:46.231 "auth": { 00:17:46.231 "state": "completed", 00:17:46.231 "digest": "sha256", 00:17:46.231 "dhgroup": "ffdhe4096" 00:17:46.231 } 00:17:46.231 } 00:17:46.231 ]' 00:17:46.231 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.231 15:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:46.231 15:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.231 15:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:46.231 15:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.231 15:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.231 15:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.231 15:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.491 15:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: --dhchap-ctrl-secret DHHC-1:01:YzkzMmE2ODQ1YzFjYzk1OWVmMDU3NTIwM2MyY2EyNGT38soj: 00:17:46.491 15:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: 
--dhchap-ctrl-secret DHHC-1:01:YzkzMmE2ODQ1YzFjYzk1OWVmMDU3NTIwM2MyY2EyNGT38soj: 00:17:47.061 15:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.061 15:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:47.061 15:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.061 15:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.061 15:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.061 15:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.061 15:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:47.061 15:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:47.321 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:17:47.321 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.321 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:47.321 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:47.321 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:47.321 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.321 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:47.321 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.321 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.321 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.321 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:47.321 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:47.321 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:47.583 00:17:47.583 15:27:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.583 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.583 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.844 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.844 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.844 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.844 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.844 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.844 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.844 { 00:17:47.844 "cntlid": 31, 00:17:47.844 "qid": 0, 00:17:47.844 "state": "enabled", 00:17:47.844 "thread": "nvmf_tgt_poll_group_000", 00:17:47.844 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:47.844 "listen_address": { 00:17:47.844 "trtype": "TCP", 00:17:47.844 "adrfam": "IPv4", 00:17:47.844 "traddr": "10.0.0.2", 00:17:47.844 "trsvcid": "4420" 00:17:47.844 }, 00:17:47.844 "peer_address": { 00:17:47.844 "trtype": "TCP", 00:17:47.844 "adrfam": "IPv4", 00:17:47.844 "traddr": "10.0.0.1", 00:17:47.844 "trsvcid": "47446" 00:17:47.844 }, 00:17:47.844 "auth": { 00:17:47.844 "state": "completed", 00:17:47.844 "digest": "sha256", 00:17:47.844 "dhgroup": "ffdhe4096" 00:17:47.844 } 00:17:47.844 } 00:17:47.844 ]' 00:17:47.844 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.844 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:47.844 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.844 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:47.844 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.844 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.844 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.844 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.104 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=: 00:17:48.104 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=: 00:17:48.673 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.673 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:48.673 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.673 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.673 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.673 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:48.673 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.673 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:48.673 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:48.932 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:17:48.932 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.932 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:48.932 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:48.932 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:48.932 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.932 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.932 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.932 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.932 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.932 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.932 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.932 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.194 00:17:49.194 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.194 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.194 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.456 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.456 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.456 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.456 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.456 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.456 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.456 { 00:17:49.456 "cntlid": 33, 00:17:49.456 "qid": 0, 00:17:49.456 "state": "enabled", 00:17:49.456 "thread": "nvmf_tgt_poll_group_000", 00:17:49.456 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:49.456 "listen_address": { 00:17:49.456 "trtype": "TCP", 00:17:49.456 "adrfam": "IPv4", 00:17:49.456 "traddr": "10.0.0.2", 00:17:49.456 "trsvcid": "4420" 00:17:49.456 }, 00:17:49.456 "peer_address": { 00:17:49.456 "trtype": "TCP", 00:17:49.456 "adrfam": "IPv4", 00:17:49.456 "traddr": "10.0.0.1", 00:17:49.456 "trsvcid": "47458" 00:17:49.456 }, 00:17:49.456 "auth": { 00:17:49.456 "state": "completed", 00:17:49.456 "digest": "sha256", 00:17:49.456 "dhgroup": "ffdhe6144" 00:17:49.456 } 00:17:49.456 } 00:17:49.456 ]' 00:17:49.456 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.456 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:49.456 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.456 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:49.456 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.718 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.718 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.718 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.718 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQzODNiNjFiNTMwODBlZTI0MTliNTQ4MmE0MjBmM2YyMWU0M2YxOTFlNjRjYzE0ytg3PA==: --dhchap-ctrl-secret 
DHHC-1:03:YzAzYWJiYjgwYTc2NDQzOTQ4M2U1NDg4NTJkZjNhMDE0ZTY5NzAxYmFjMjlkZjUzODdlYmYyMDQ3MzhkZWIxMpUjPms=: 00:17:49.718 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTQzODNiNjFiNTMwODBlZTI0MTliNTQ4MmE0MjBmM2YyMWU0M2YxOTFlNjRjYzE0ytg3PA==: --dhchap-ctrl-secret DHHC-1:03:YzAzYWJiYjgwYTc2NDQzOTQ4M2U1NDg4NTJkZjNhMDE0ZTY5NzAxYmFjMjlkZjUzODdlYmYyMDQ3MzhkZWIxMpUjPms=: 00:17:50.290 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.552 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:50.552 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.552 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.552 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.552 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.552 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:50.552 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:50.552 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:17:50.552 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.552 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:50.552 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:50.552 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:50.552 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.552 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.552 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.552 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.552 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.552 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.552 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.552 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.123 00:17:51.123 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.123 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.123 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.123 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.123 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.123 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.123 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.123 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.123 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:51.123 { 00:17:51.123 "cntlid": 35, 00:17:51.123 "qid": 0, 00:17:51.123 "state": "enabled", 00:17:51.123 "thread": "nvmf_tgt_poll_group_000", 00:17:51.123 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:51.123 "listen_address": { 00:17:51.123 "trtype": "TCP", 00:17:51.123 "adrfam": "IPv4", 00:17:51.123 "traddr": "10.0.0.2", 00:17:51.123 "trsvcid": "4420" 00:17:51.123 }, 00:17:51.123 "peer_address": { 00:17:51.123 "trtype": "TCP", 00:17:51.123 "adrfam": "IPv4", 00:17:51.123 "traddr": "10.0.0.1", 00:17:51.123 "trsvcid": "47490" 00:17:51.123 }, 00:17:51.123 "auth": { 00:17:51.123 "state": "completed", 00:17:51.123 "digest": "sha256", 00:17:51.123 "dhgroup": "ffdhe6144" 00:17:51.123 } 00:17:51.123 } 00:17:51.123 ]' 00:17:51.123 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.123 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:51.123 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.384 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:51.384 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.384 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.384 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.384 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.645 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: --dhchap-ctrl-secret DHHC-1:02:Y2YxMTcxMmE1Nzk3ODk4OWEwZDRkNTE3MjA1NWIzZDUzMGM2M2Y2ZjllNTQ3Y2Fj2NQJnQ==: 00:17:51.645 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: --dhchap-ctrl-secret DHHC-1:02:Y2YxMTcxMmE1Nzk3ODk4OWEwZDRkNTE3MjA1NWIzZDUzMGM2M2Y2ZjllNTQ3Y2Fj2NQJnQ==: 00:17:52.217 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.217 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:52.217 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.217 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.217 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.217 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.217 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:52.217 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:52.477 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:17:52.477 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.477 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:52.477 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:52.477 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:52.477 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.477 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.477 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.477 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.477 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.477 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.477 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.477 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.738 00:17:52.738 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.738 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.738 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.999 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.999 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.999 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.999 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.999 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.999 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.999 { 00:17:52.999 "cntlid": 37, 00:17:52.999 "qid": 0, 00:17:52.999 "state": "enabled", 00:17:52.999 "thread": "nvmf_tgt_poll_group_000", 00:17:52.999 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:52.999 "listen_address": { 00:17:52.999 "trtype": "TCP", 00:17:52.999 "adrfam": "IPv4", 00:17:52.999 "traddr": "10.0.0.2", 00:17:52.999 "trsvcid": "4420" 00:17:52.999 }, 00:17:52.999 "peer_address": { 00:17:52.999 "trtype": "TCP", 00:17:52.999 "adrfam": "IPv4", 00:17:52.999 "traddr": "10.0.0.1", 00:17:52.999 "trsvcid": "47518" 00:17:52.999 }, 00:17:52.999 "auth": { 00:17:52.999 "state": "completed", 00:17:52.999 "digest": "sha256", 00:17:52.999 "dhgroup": "ffdhe6144" 00:17:52.999 } 00:17:52.999 } 00:17:52.999 ]' 00:17:52.999 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.999 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:52.999 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.999 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:52.999 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.999 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.999 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:52.999 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.294 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: --dhchap-ctrl-secret DHHC-1:01:YzkzMmE2ODQ1YzFjYzk1OWVmMDU3NTIwM2MyY2EyNGT38soj: 00:17:53.294 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: --dhchap-ctrl-secret DHHC-1:01:YzkzMmE2ODQ1YzFjYzk1OWVmMDU3NTIwM2MyY2EyNGT38soj: 00:17:53.864 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.864 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:53.864 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.864 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.864 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.864 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.864 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:53.864 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:54.125 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:17:54.125 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.125 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:54.125 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:54.125 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:54.125 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.125 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:54.125 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.125 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.125 15:27:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.125 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:54.125 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:54.125 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:54.385 00:17:54.385 15:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.385 15:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.385 15:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.645 15:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.645 15:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.645 15:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.645 15:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.645 15:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.645 15:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.645 { 00:17:54.645 "cntlid": 39, 00:17:54.645 "qid": 0, 00:17:54.645 "state": "enabled", 00:17:54.645 "thread": "nvmf_tgt_poll_group_000", 00:17:54.645 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:54.645 "listen_address": { 00:17:54.645 "trtype": "TCP", 00:17:54.645 "adrfam": "IPv4", 00:17:54.645 "traddr": "10.0.0.2", 00:17:54.645 "trsvcid": "4420" 00:17:54.645 }, 00:17:54.645 "peer_address": { 00:17:54.645 "trtype": "TCP", 00:17:54.645 "adrfam": "IPv4", 00:17:54.645 "traddr": "10.0.0.1", 00:17:54.645 "trsvcid": "47550" 00:17:54.645 }, 00:17:54.645 "auth": { 00:17:54.645 "state": "completed", 00:17:54.645 "digest": "sha256", 00:17:54.645 "dhgroup": "ffdhe6144" 00:17:54.645 } 00:17:54.645 } 00:17:54.645 ]' 00:17:54.645 15:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.645 15:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:54.645 15:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.645 15:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:54.645 15:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.905 15:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:17:54.905 15:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.905 15:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.905 15:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=: 00:17:54.905 15:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=: 00:17:55.474 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.735 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:55.735 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.735 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.735 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.735 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:55.735 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.735 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:55.735 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:55.735 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:17:55.735 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.735 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:55.735 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:55.735 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:55.735 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.735 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.735 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
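Every connect_authenticate pass recorded in this trace follows the same shape, whatever digest/dhgroup/key is under test; only the parameters rotate. A condensed sketch of one pass follows. The helper names (hostrpc, rpc_cmd), addresses, NQNs, and flags are taken from the expanded commands in the trace itself; the array contents, the surrounding digest/dhgroup loops, and the variable wiring are placeholders for illustration, not the literal auth.sh source.

  # hostrpc drives scripts/rpc.py against the host-side socket (/var/tmp/host.sock);
  # rpc_cmd drives the target-side RPC server, as the expansions in the trace show.
  # This body runs inside the trace's "for dhgroup"/"for keyid" loops, with
  # $digest and $dhgroup supplied by the enclosing iteration.
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  keys=()    # DHHC-1 host secrets (key0..key3), as printed in the trace; elided here
  ckeys=()   # matching ctrlr secrets; empty for key3, so that pass runs without bidirectional auth
  for keyid in "${!keys[@]}"; do
      # 1. Pin the host initiator to the digest/dhgroup pair under test.
      hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      # 2. Register the host NQN on the target with its key (plus ctrlr key when one exists).
      rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" \
          ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
      # 3. Attach over TCP, verify the qpair's auth block, then detach.
      hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
          -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key "key$keyid" \
          ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
      rpc_cmd nvmf_subsystem_get_qpairs "$subnqn"   # expect auth.state == "completed"
      hostrpc bdev_nvme_detach_controller nvme0
      # 4. Repeat the handshake with the kernel initiator, then tear down.
      nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" -l 0 \
          --dhchap-secret "${keys[$keyid]}" \
          ${ckeys[$keyid]:+--dhchap-ctrl-secret "${ckeys[$keyid]}"}
      nvme disconnect -n "$subnqn"
      rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
  done
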
00:17:55.735 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.735 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.735 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.735 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.735 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.305 00:17:56.305 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.305 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.305 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.566 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.566 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.566 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.566 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.566 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.566 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.566 { 00:17:56.566 "cntlid": 41, 00:17:56.566 "qid": 0, 00:17:56.566 "state": "enabled", 00:17:56.566 "thread": "nvmf_tgt_poll_group_000", 00:17:56.566 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:56.566 "listen_address": { 00:17:56.566 "trtype": "TCP", 00:17:56.566 "adrfam": "IPv4", 00:17:56.566 "traddr": "10.0.0.2", 00:17:56.566 "trsvcid": "4420" 00:17:56.566 }, 00:17:56.566 "peer_address": { 00:17:56.566 "trtype": "TCP", 00:17:56.566 "adrfam": "IPv4", 00:17:56.566 "traddr": "10.0.0.1", 00:17:56.566 "trsvcid": "48854" 00:17:56.566 }, 00:17:56.566 "auth": { 00:17:56.566 "state": "completed", 00:17:56.566 "digest": "sha256", 00:17:56.566 "dhgroup": "ffdhe8192" 00:17:56.566 } 00:17:56.566 } 00:17:56.566 ]' 00:17:56.566 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.566 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:56.566 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.566 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:56.566 15:27:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.566 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.566 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.566 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.826 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQzODNiNjFiNTMwODBlZTI0MTliNTQ4MmE0MjBmM2YyMWU0M2YxOTFlNjRjYzE0ytg3PA==: --dhchap-ctrl-secret DHHC-1:03:YzAzYWJiYjgwYTc2NDQzOTQ4M2U1NDg4NTJkZjNhMDE0ZTY5NzAxYmFjMjlkZjUzODdlYmYyMDQ3MzhkZWIxMpUjPms=: 00:17:56.826 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTQzODNiNjFiNTMwODBlZTI0MTliNTQ4MmE0MjBmM2YyMWU0M2YxOTFlNjRjYzE0ytg3PA==: --dhchap-ctrl-secret DHHC-1:03:YzAzYWJiYjgwYTc2NDQzOTQ4M2U1NDg4NTJkZjNhMDE0ZTY5NzAxYmFjMjlkZjUzODdlYmYyMDQ3MzhkZWIxMpUjPms=: 00:17:57.396 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.396 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:57.396 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.396 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.396 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.396 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.396 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:57.396 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:57.658 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:17:57.658 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.658 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:57.658 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:57.658 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:57.658 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.658 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.658 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.658 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.658 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.658 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.658 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.658 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.230 00:17:58.230 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.230 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.230 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.230 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.230 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.230 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.230 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.230 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.230 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.230 { 00:17:58.230 "cntlid": 43, 00:17:58.230 "qid": 0, 00:17:58.230 "state": "enabled", 00:17:58.230 "thread": "nvmf_tgt_poll_group_000", 00:17:58.230 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:58.230 "listen_address": { 00:17:58.230 "trtype": "TCP", 00:17:58.230 "adrfam": "IPv4", 00:17:58.230 "traddr": "10.0.0.2", 00:17:58.230 "trsvcid": "4420" 00:17:58.230 }, 00:17:58.230 "peer_address": { 00:17:58.230 "trtype": "TCP", 00:17:58.230 "adrfam": "IPv4", 00:17:58.230 "traddr": "10.0.0.1", 00:17:58.230 "trsvcid": "48890" 00:17:58.230 }, 00:17:58.230 "auth": { 00:17:58.230 "state": "completed", 00:17:58.230 "digest": "sha256", 00:17:58.230 "dhgroup": "ffdhe8192" 00:17:58.230 } 00:17:58.230 } 00:17:58.230 ]' 00:17:58.230 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.230 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:17:58.230 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.491 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:58.491 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.491 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.491 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.491 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.491 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: --dhchap-ctrl-secret DHHC-1:02:Y2YxMTcxMmE1Nzk3ODk4OWEwZDRkNTE3MjA1NWIzZDUzMGM2M2Y2ZjllNTQ3Y2Fj2NQJnQ==: 00:17:58.491 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: --dhchap-ctrl-secret DHHC-1:02:Y2YxMTcxMmE1Nzk3ODk4OWEwZDRkNTE3MjA1NWIzZDUzMGM2M2Y2ZjllNTQ3Y2Fj2NQJnQ==: 00:17:59.430 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.430 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.430 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:59.430 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.430 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.430 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.430 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.430 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:59.430 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:59.430 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:17:59.430 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.430 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:59.430 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:59.430 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:59.430 15:27:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.430 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.430 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.430 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.430 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.430 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.430 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.430 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.092 00:18:00.092 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.092 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.092 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.092 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.092 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.092 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.092 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.092 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.092 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.092 { 00:18:00.092 "cntlid": 45, 00:18:00.092 "qid": 0, 00:18:00.092 "state": "enabled", 00:18:00.092 "thread": "nvmf_tgt_poll_group_000", 00:18:00.092 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:00.092 "listen_address": { 00:18:00.092 "trtype": "TCP", 00:18:00.092 "adrfam": "IPv4", 00:18:00.092 "traddr": "10.0.0.2", 00:18:00.092 "trsvcid": "4420" 00:18:00.092 }, 00:18:00.092 "peer_address": { 00:18:00.092 "trtype": "TCP", 00:18:00.092 "adrfam": "IPv4", 00:18:00.092 "traddr": "10.0.0.1", 00:18:00.092 "trsvcid": "48910" 00:18:00.092 }, 00:18:00.092 "auth": { 00:18:00.092 "state": "completed", 00:18:00.092 "digest": "sha256", 00:18:00.092 "dhgroup": "ffdhe8192" 00:18:00.092 } 00:18:00.092 } 00:18:00.092 ]' 00:18:00.092 
15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.092 15:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:00.092 15:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.436 15:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:00.436 15:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.436 15:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.436 15:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.436 15:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.436 15:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: --dhchap-ctrl-secret DHHC-1:01:YzkzMmE2ODQ1YzFjYzk1OWVmMDU3NTIwM2MyY2EyNGT38soj: 00:18:00.436 15:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: --dhchap-ctrl-secret DHHC-1:01:YzkzMmE2ODQ1YzFjYzk1OWVmMDU3NTIwM2MyY2EyNGT38soj: 00:18:01.011 15:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.011 15:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:01.011 15:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.011 15:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.271 15:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.271 15:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.271 15:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:01.271 15:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:01.271 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:18:01.271 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.271 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:01.271 15:27:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:01.271 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:01.271 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.271 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:01.271 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.271 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.271 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.271 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:01.271 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:01.271 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:01.843 00:18:01.843 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.843 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.843 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.104 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.104 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.104 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.104 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.104 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.104 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.104 { 00:18:02.104 "cntlid": 47, 00:18:02.104 "qid": 0, 00:18:02.104 "state": "enabled", 00:18:02.104 "thread": "nvmf_tgt_poll_group_000", 00:18:02.104 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:02.104 "listen_address": { 00:18:02.104 "trtype": "TCP", 00:18:02.104 "adrfam": "IPv4", 00:18:02.104 "traddr": "10.0.0.2", 00:18:02.104 "trsvcid": "4420" 00:18:02.104 }, 00:18:02.104 "peer_address": { 00:18:02.104 "trtype": "TCP", 00:18:02.104 "adrfam": "IPv4", 00:18:02.104 "traddr": "10.0.0.1", 00:18:02.104 "trsvcid": "48930" 00:18:02.104 }, 00:18:02.104 "auth": { 00:18:02.104 "state": "completed", 00:18:02.104 
"digest": "sha256", 00:18:02.104 "dhgroup": "ffdhe8192" 00:18:02.104 } 00:18:02.104 } 00:18:02.104 ]' 00:18:02.104 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.104 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:02.104 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.104 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:02.104 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.104 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.104 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.104 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.364 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=: 00:18:02.364 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=: 00:18:02.934 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.934 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:02.934 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.934 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.934 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.934 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:02.934 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:02.934 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.934 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:02.934 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:03.195 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:18:03.195 15:27:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.195 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:03.195 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:03.195 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:03.195 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.195 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.195 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.195 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.195 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.195 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.195 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.195 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.456 00:18:03.456 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.456 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:03.456 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.456 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.456 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.456 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.456 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.716 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.716 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.716 { 00:18:03.716 "cntlid": 49, 00:18:03.716 "qid": 0, 00:18:03.716 "state": "enabled", 00:18:03.716 "thread": "nvmf_tgt_poll_group_000", 00:18:03.716 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:03.716 "listen_address": { 00:18:03.716 "trtype": "TCP", 00:18:03.716 "adrfam": "IPv4", 
00:18:03.716 "traddr": "10.0.0.2", 00:18:03.716 "trsvcid": "4420" 00:18:03.716 }, 00:18:03.716 "peer_address": { 00:18:03.716 "trtype": "TCP", 00:18:03.716 "adrfam": "IPv4", 00:18:03.716 "traddr": "10.0.0.1", 00:18:03.716 "trsvcid": "48952" 00:18:03.716 }, 00:18:03.716 "auth": { 00:18:03.716 "state": "completed", 00:18:03.716 "digest": "sha384", 00:18:03.716 "dhgroup": "null" 00:18:03.716 } 00:18:03.716 } 00:18:03.716 ]' 00:18:03.716 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.716 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:03.716 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.716 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:03.716 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.716 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.716 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.716 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.976 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQzODNiNjFiNTMwODBlZTI0MTliNTQ4MmE0MjBmM2YyMWU0M2YxOTFlNjRjYzE0ytg3PA==: --dhchap-ctrl-secret DHHC-1:03:YzAzYWJiYjgwYTc2NDQzOTQ4M2U1NDg4NTJkZjNhMDE0ZTY5NzAxYmFjMjlkZjUzODdlYmYyMDQ3MzhkZWIxMpUjPms=: 00:18:03.976 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTQzODNiNjFiNTMwODBlZTI0MTliNTQ4MmE0MjBmM2YyMWU0M2YxOTFlNjRjYzE0ytg3PA==: --dhchap-ctrl-secret DHHC-1:03:YzAzYWJiYjgwYTc2NDQzOTQ4M2U1NDg4NTJkZjNhMDE0ZTY5NzAxYmFjMjlkZjUzODdlYmYyMDQ3MzhkZWIxMpUjPms=: 00:18:04.546 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.806 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:04.806 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.806 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.806 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.806 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.806 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:04.806 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:04.806 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:18:04.806 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.806 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:04.806 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:04.806 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:04.806 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.806 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.806 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.806 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.806 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.806 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.806 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.806 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.068 00:18:05.068 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.068 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.068 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.328 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.328 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.328 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.328 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.328 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.329 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.329 { 00:18:05.329 "cntlid": 51, 00:18:05.329 "qid": 0, 00:18:05.329 "state": "enabled", 
00:18:05.329 "thread": "nvmf_tgt_poll_group_000", 00:18:05.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:05.329 "listen_address": { 00:18:05.329 "trtype": "TCP", 00:18:05.329 "adrfam": "IPv4", 00:18:05.329 "traddr": "10.0.0.2", 00:18:05.329 "trsvcid": "4420" 00:18:05.329 }, 00:18:05.329 "peer_address": { 00:18:05.329 "trtype": "TCP", 00:18:05.329 "adrfam": "IPv4", 00:18:05.329 "traddr": "10.0.0.1", 00:18:05.329 "trsvcid": "48974" 00:18:05.329 }, 00:18:05.329 "auth": { 00:18:05.329 "state": "completed", 00:18:05.329 "digest": "sha384", 00:18:05.329 "dhgroup": "null" 00:18:05.329 } 00:18:05.329 } 00:18:05.329 ]' 00:18:05.329 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.329 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:05.329 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.329 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:05.329 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.329 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.329 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.329 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.590 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: --dhchap-ctrl-secret DHHC-1:02:Y2YxMTcxMmE1Nzk3ODk4OWEwZDRkNTE3MjA1NWIzZDUzMGM2M2Y2ZjllNTQ3Y2Fj2NQJnQ==: 00:18:05.590 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: --dhchap-ctrl-secret DHHC-1:02:Y2YxMTcxMmE1Nzk3ODk4OWEwZDRkNTE3MjA1NWIzZDUzMGM2M2Y2ZjllNTQ3Y2Fj2NQJnQ==: 00:18:06.162 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.162 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:06.162 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.162 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.162 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.162 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.162 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:18:06.421 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:06.421 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:18:06.421 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.421 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:06.421 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:06.421 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:06.421 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.421 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.421 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.421 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.421 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.421 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.421 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.421 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.681 00:18:06.681 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.681 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.681 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.942 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.942 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.942 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.942 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.942 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.942 15:27:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:06.942 {
00:18:06.942 "cntlid": 53,
00:18:06.942 "qid": 0,
00:18:06.942 "state": "enabled",
00:18:06.942 "thread": "nvmf_tgt_poll_group_000",
00:18:06.942 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:06.942 "listen_address": {
00:18:06.942 "trtype": "TCP",
00:18:06.942 "adrfam": "IPv4",
00:18:06.942 "traddr": "10.0.0.2",
00:18:06.942 "trsvcid": "4420"
00:18:06.942 },
00:18:06.942 "peer_address": {
00:18:06.942 "trtype": "TCP",
00:18:06.942 "adrfam": "IPv4",
00:18:06.942 "traddr": "10.0.0.1",
00:18:06.942 "trsvcid": "56644"
00:18:06.942 },
00:18:06.942 "auth": {
00:18:06.942 "state": "completed",
00:18:06.942 "digest": "sha384",
00:18:06.942 "dhgroup": "null"
00:18:06.942 }
00:18:06.942 }
00:18:06.942 ]'
00:18:06.942 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:06.942 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:06.942 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:06.942 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:18:06.942 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:06.942 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:06.942 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:06.942 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:07.203 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: --dhchap-ctrl-secret DHHC-1:01:YzkzMmE2ODQ1YzFjYzk1OWVmMDU3NTIwM2MyY2EyNGT38soj:
00:18:07.203 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: --dhchap-ctrl-secret DHHC-1:01:YzkzMmE2ODQ1YzFjYzk1OWVmMDU3NTIwM2MyY2EyNGT38soj:
00:18:07.773 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:08.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:08.068 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:08.068 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:08.068 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:08.068 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:08.068 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:08.068 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:18:08.068 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:18:08.068 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3
00:18:08.068 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:08.068 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:08.068 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:18:08.069 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:18:08.069 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:08.069 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:18:08.069 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:08.069 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:08.069 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:08.069 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:08.069 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:08.069 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:08.330
00:18:08.330 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:08.330 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:08.330 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:08.589 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:08.589 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:08.589 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:08.589 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:08.589 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:08.589 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:08.589 {
00:18:08.589 "cntlid": 55,
00:18:08.589 "qid": 0,
00:18:08.589 "state": "enabled",
00:18:08.590 "thread": "nvmf_tgt_poll_group_000",
00:18:08.590 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:08.590 "listen_address": {
00:18:08.590 "trtype": "TCP",
00:18:08.590 "adrfam": "IPv4",
00:18:08.590 "traddr": "10.0.0.2",
00:18:08.590 "trsvcid": "4420"
00:18:08.590 },
00:18:08.590 "peer_address": {
00:18:08.590 "trtype": "TCP",
00:18:08.590 "adrfam": "IPv4",
00:18:08.590 "traddr": "10.0.0.1",
00:18:08.590 "trsvcid": "56668"
00:18:08.590 },
00:18:08.590 "auth": {
00:18:08.590 "state": "completed",
00:18:08.590 "digest": "sha384",
00:18:08.590 "dhgroup": "null"
00:18:08.590 }
00:18:08.590 }
00:18:08.590 ]'
00:18:08.590 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:08.590 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:08.590 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:08.590 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:18:08.590 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:08.590 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:08.590 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:08.590 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:08.849 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=:
00:18:08.850 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=:
00:18:09.420 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:09.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:09.420 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:09.420 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:09.420 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:09.420 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:09.420 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:18:09.421 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:09.421 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:18:09.421 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:18:09.681 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0
00:18:09.681 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:09.681 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:09.681 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:18:09.681 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:18:09.681 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:09.681 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:09.681 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:09.681 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:09.681 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:09.681 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:09.681 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:09.681 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:09.943
00:18:09.943 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:09.943 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:09.943 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:10.203 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:10.203 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:10.203 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:10.203 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:10.203 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:10.203 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:10.203 {
00:18:10.203 "cntlid": 57,
00:18:10.203 "qid": 0,
00:18:10.203 "state": "enabled",
00:18:10.203 "thread": "nvmf_tgt_poll_group_000",
00:18:10.203 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:10.203 "listen_address": {
00:18:10.203 "trtype": "TCP",
00:18:10.203 "adrfam": "IPv4",
00:18:10.203 "traddr": "10.0.0.2",
00:18:10.203 "trsvcid": "4420"
00:18:10.203 },
00:18:10.203 "peer_address": {
00:18:10.203 "trtype": "TCP",
00:18:10.203 "adrfam": "IPv4",
00:18:10.203 "traddr": "10.0.0.1",
00:18:10.203 "trsvcid": "56682"
00:18:10.203 },
00:18:10.203 "auth": {
00:18:10.203 "state": "completed",
00:18:10.203 "digest": "sha384",
00:18:10.203 "dhgroup": "ffdhe2048"
00:18:10.203 }
00:18:10.203 }
00:18:10.203 ]'
00:18:10.203 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:10.203 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:10.203 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:10.203 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:10.203 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:10.203 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:10.203 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:10.203 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:10.462 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQzODNiNjFiNTMwODBlZTI0MTliNTQ4MmE0MjBmM2YyMWU0M2YxOTFlNjRjYzE0ytg3PA==: --dhchap-ctrl-secret DHHC-1:03:YzAzYWJiYjgwYTc2NDQzOTQ4M2U1NDg4NTJkZjNhMDE0ZTY5NzAxYmFjMjlkZjUzODdlYmYyMDQ3MzhkZWIxMpUjPms=:
00:18:10.462 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTQzODNiNjFiNTMwODBlZTI0MTliNTQ4MmE0MjBmM2YyMWU0M2YxOTFlNjRjYzE0ytg3PA==: --dhchap-ctrl-secret DHHC-1:03:YzAzYWJiYjgwYTc2NDQzOTQ4M2U1NDg4NTJkZjNhMDE0ZTY5NzAxYmFjMjlkZjUzODdlYmYyMDQ3MzhkZWIxMpUjPms=:
00:18:11.031 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:11.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:11.031 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:11.031 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:11.031 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:11.031 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:11.031 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:11.031 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:18:11.031 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:18:11.291 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1
00:18:11.292 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:11.292 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:11.292 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:18:11.292 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:18:11.292 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:11.292 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:11.292 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:11.292 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:11.292 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:11.292 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:11.292 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:11.292 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:11.552
00:18:11.552 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:11.552 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:11.552 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:11.812 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:11.812 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:11.812 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:11.812 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:11.812 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:11.812 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:11.812 {
00:18:11.812 "cntlid": 59,
00:18:11.812 "qid": 0,
00:18:11.812 "state": "enabled",
00:18:11.812 "thread": "nvmf_tgt_poll_group_000",
00:18:11.812 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:11.812 "listen_address": {
00:18:11.812 "trtype": "TCP",
00:18:11.812 "adrfam": "IPv4",
00:18:11.812 "traddr": "10.0.0.2",
00:18:11.812 "trsvcid": "4420"
00:18:11.812 },
00:18:11.812 "peer_address": {
00:18:11.812 "trtype": "TCP",
00:18:11.812 "adrfam": "IPv4",
00:18:11.812 "traddr": "10.0.0.1",
00:18:11.812 "trsvcid": "56710"
00:18:11.812 },
00:18:11.812 "auth": {
00:18:11.812 "state": "completed",
00:18:11.812 "digest": "sha384",
00:18:11.812 "dhgroup": "ffdhe2048"
00:18:11.812 }
00:18:11.812 }
00:18:11.812 ]'
00:18:11.812 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:11.813 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:11.813 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:11.813 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:11.813 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:11.813 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:11.813 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:11.813 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:12.074 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: --dhchap-ctrl-secret DHHC-1:02:Y2YxMTcxMmE1Nzk3ODk4OWEwZDRkNTE3MjA1NWIzZDUzMGM2M2Y2ZjllNTQ3Y2Fj2NQJnQ==:
00:18:12.074 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: --dhchap-ctrl-secret DHHC-1:02:Y2YxMTcxMmE1Nzk3ODk4OWEwZDRkNTE3MjA1NWIzZDUzMGM2M2Y2ZjllNTQ3Y2Fj2NQJnQ==:
00:18:12.644 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:12.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:12.644 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:12.644 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:12.644 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:12.644 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:12.644 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:12.644 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:18:12.644 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:18:12.904 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2
00:18:12.904 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:12.904 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:12.904 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:18:12.904 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:18:12.904 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:12.904 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:12.904 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:12.904 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:12.904 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:12.904 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:12.904 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:12.904 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:13.165
00:18:13.165 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:13.165 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:13.165 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:13.426 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:13.426 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:13.426 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:13.426 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:13.426 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:13.426 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:13.426 {
00:18:13.426 "cntlid": 61,
00:18:13.426 "qid": 0,
00:18:13.426 "state": "enabled",
00:18:13.426 "thread": "nvmf_tgt_poll_group_000",
00:18:13.426 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:13.426 "listen_address": {
00:18:13.426 "trtype": "TCP",
00:18:13.426 "adrfam": "IPv4",
00:18:13.426 "traddr": "10.0.0.2",
00:18:13.426 "trsvcid": "4420"
00:18:13.426 },
00:18:13.426 "peer_address": {
00:18:13.426 "trtype": "TCP",
00:18:13.426 "adrfam": "IPv4",
00:18:13.426 "traddr": "10.0.0.1",
00:18:13.426 "trsvcid": "56720"
00:18:13.426 },
00:18:13.426 "auth": {
00:18:13.426 "state": "completed",
00:18:13.426 "digest": "sha384",
00:18:13.426 "dhgroup": "ffdhe2048"
00:18:13.426 }
00:18:13.426 }
00:18:13.426 ]'
00:18:13.426 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:13.426 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:13.426 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:13.426 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:13.426 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:13.426 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:13.426 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:13.426 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:13.688 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: --dhchap-ctrl-secret DHHC-1:01:YzkzMmE2ODQ1YzFjYzk1OWVmMDU3NTIwM2MyY2EyNGT38soj:
00:18:13.688 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: --dhchap-ctrl-secret DHHC-1:01:YzkzMmE2ODQ1YzFjYzk1OWVmMDU3NTIwM2MyY2EyNGT38soj:
00:18:14.260 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:14.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:14.260 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:14.260 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:14.260 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:14.260 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:14.260 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:14.260 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:18:14.260 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:18:14.521 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3
00:18:14.521 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:14.521 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:14.521 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:18:14.521 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:18:14.521 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:14.521 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:18:14.521 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:14.521 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:14.521 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:14.521 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:14.521 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:14.521 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:14.782
00:18:14.782 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:14.782 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:14.782 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:15.042 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:15.042 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:15.042 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:15.042 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:15.042 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:15.042 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:15.042 {
00:18:15.042 "cntlid": 63,
00:18:15.042 "qid": 0,
00:18:15.042 "state": "enabled",
00:18:15.042 "thread": "nvmf_tgt_poll_group_000",
00:18:15.042 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:15.042 "listen_address": {
00:18:15.042 "trtype": "TCP",
00:18:15.042 "adrfam": "IPv4",
00:18:15.042 "traddr": "10.0.0.2",
00:18:15.042 "trsvcid": "4420"
00:18:15.042 },
00:18:15.042 "peer_address": {
00:18:15.042 "trtype": "TCP",
00:18:15.042 "adrfam": "IPv4",
00:18:15.042 "traddr": "10.0.0.1",
00:18:15.042 "trsvcid": "56752"
00:18:15.042 },
00:18:15.042 "auth": {
00:18:15.042 "state": "completed",
00:18:15.042 "digest": "sha384",
00:18:15.042 "dhgroup": "ffdhe2048"
00:18:15.042 }
00:18:15.042 }
00:18:15.042 ]'
00:18:15.042 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:15.043 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:15.043 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:15.043 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:15.043 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:15.043 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:15.043 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:15.043 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:15.302 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=:
00:18:15.303 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=:
00:18:15.873 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:15.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:15.873 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:15.873 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:15.873 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:15.873 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:15.873 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:18:15.873 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:15.873 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:18:15.873 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:18:16.134 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0
00:18:16.134 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:16.134 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:16.134 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:18:16.134 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:18:16.134 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:16.134 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:16.134 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:16.134 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:16.134 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:16.134 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:16.134 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:16.134 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:16.395
00:18:16.395 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:16.395 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:16.395 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.656 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.656 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.656 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.656 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.656 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.656 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.656 { 00:18:16.656 "cntlid": 65, 00:18:16.656 "qid": 0, 00:18:16.656 "state": "enabled", 00:18:16.656 "thread": "nvmf_tgt_poll_group_000", 00:18:16.656 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:16.656 "listen_address": { 00:18:16.656 "trtype": "TCP", 00:18:16.656 "adrfam": "IPv4", 00:18:16.656 "traddr": "10.0.0.2", 00:18:16.656 "trsvcid": "4420" 00:18:16.656 }, 00:18:16.656 "peer_address": { 00:18:16.656 "trtype": "TCP", 00:18:16.656 "adrfam": "IPv4", 00:18:16.656 "traddr": "10.0.0.1", 00:18:16.656 "trsvcid": "43560" 00:18:16.656 }, 00:18:16.656 "auth": { 00:18:16.656 "state": "completed", 00:18:16.656 "digest": "sha384", 00:18:16.656 "dhgroup": "ffdhe3072" 00:18:16.656 } 00:18:16.656 } 00:18:16.656 ]' 00:18:16.656 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.656 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:16.656 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.656 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:16.656 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.656 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.656 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.656 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.917 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQzODNiNjFiNTMwODBlZTI0MTliNTQ4MmE0MjBmM2YyMWU0M2YxOTFlNjRjYzE0ytg3PA==: --dhchap-ctrl-secret DHHC-1:03:YzAzYWJiYjgwYTc2NDQzOTQ4M2U1NDg4NTJkZjNhMDE0ZTY5NzAxYmFjMjlkZjUzODdlYmYyMDQ3MzhkZWIxMpUjPms=: 00:18:16.917 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTQzODNiNjFiNTMwODBlZTI0MTliNTQ4MmE0MjBmM2YyMWU0M2YxOTFlNjRjYzE0ytg3PA==: --dhchap-ctrl-secret DHHC-1:03:YzAzYWJiYjgwYTc2NDQzOTQ4M2U1NDg4NTJkZjNhMDE0ZTY5NzAxYmFjMjlkZjUzODdlYmYyMDQ3MzhkZWIxMpUjPms=: 00:18:17.494 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.755 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.755 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:17.755 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.755 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.755 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.755 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:17.755 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:17.755 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:17.755 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:18:17.755 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.755 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:17.755 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:17.755 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:17.755 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.755 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.755 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.755 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.755 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.755 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.755 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.755 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.015 00:18:18.015 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:18.015 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:18.015 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.276 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.276 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.276 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.276 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.276 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.276 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:18.276 { 00:18:18.276 "cntlid": 67, 00:18:18.276 "qid": 0, 00:18:18.276 "state": "enabled", 00:18:18.276 "thread": "nvmf_tgt_poll_group_000", 00:18:18.276 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:18.276 "listen_address": { 00:18:18.276 "trtype": "TCP", 00:18:18.276 "adrfam": "IPv4", 00:18:18.276 "traddr": "10.0.0.2", 00:18:18.276 "trsvcid": "4420" 00:18:18.276 }, 00:18:18.276 "peer_address": { 00:18:18.276 "trtype": "TCP", 00:18:18.276 "adrfam": "IPv4", 00:18:18.276 "traddr": "10.0.0.1", 00:18:18.276 "trsvcid": "43594" 00:18:18.276 }, 00:18:18.276 "auth": { 00:18:18.276 "state": "completed", 00:18:18.276 "digest": "sha384", 00:18:18.276 "dhgroup": "ffdhe3072" 00:18:18.276 } 00:18:18.276 } 00:18:18.276 ]' 00:18:18.276 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:18.276 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:18.276 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:18.276 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:18.276 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:18.276 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.276 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.276 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.537 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: --dhchap-ctrl-secret 
DHHC-1:02:Y2YxMTcxMmE1Nzk3ODk4OWEwZDRkNTE3MjA1NWIzZDUzMGM2M2Y2ZjllNTQ3Y2Fj2NQJnQ==: 00:18:18.537 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: --dhchap-ctrl-secret DHHC-1:02:Y2YxMTcxMmE1Nzk3ODk4OWEwZDRkNTE3MjA1NWIzZDUzMGM2M2Y2ZjllNTQ3Y2Fj2NQJnQ==: 00:18:19.107 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.369 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:19.369 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.369 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.369 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.369 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:19.369 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:19.369 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:19.369 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:18:19.369 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:19.369 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:19.369 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:19.369 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:19.369 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.369 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.369 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.369 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.369 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.369 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.369 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.369 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.629 00:18:19.629 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.629 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.629 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.890 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.890 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.890 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.890 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.890 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.890 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.890 { 00:18:19.890 "cntlid": 69, 00:18:19.890 "qid": 0, 00:18:19.890 "state": "enabled", 00:18:19.890 "thread": "nvmf_tgt_poll_group_000", 00:18:19.890 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:19.890 "listen_address": { 00:18:19.890 "trtype": "TCP", 00:18:19.890 "adrfam": "IPv4", 00:18:19.890 "traddr": "10.0.0.2", 00:18:19.890 "trsvcid": "4420" 00:18:19.890 }, 00:18:19.890 "peer_address": { 00:18:19.890 "trtype": "TCP", 00:18:19.890 "adrfam": "IPv4", 00:18:19.890 "traddr": "10.0.0.1", 00:18:19.890 "trsvcid": "43638" 00:18:19.890 }, 00:18:19.890 "auth": { 00:18:19.890 "state": "completed", 00:18:19.890 "digest": "sha384", 00:18:19.890 "dhgroup": "ffdhe3072" 00:18:19.890 } 00:18:19.890 } 00:18:19.890 ]' 00:18:19.890 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.890 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:19.890 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.890 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:19.890 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.890 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.890 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.890 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:20.150 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: --dhchap-ctrl-secret DHHC-1:01:YzkzMmE2ODQ1YzFjYzk1OWVmMDU3NTIwM2MyY2EyNGT38soj: 00:18:20.150 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: --dhchap-ctrl-secret DHHC-1:01:YzkzMmE2ODQ1YzFjYzk1OWVmMDU3NTIwM2MyY2EyNGT38soj: 00:18:20.720 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.980 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:20.980 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.980 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.980 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.980 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:20.980 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:20.980 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:20.980 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:18:20.980 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.980 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:20.980 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:20.980 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:20.981 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.981 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:20.981 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.981 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.981 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.981 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
00:18:20.981 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:20.981 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:21.241 00:18:21.241 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:21.241 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:21.241 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.500 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.500 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.500 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.500 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.500 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.500 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:21.500 { 00:18:21.500 "cntlid": 71, 00:18:21.500 "qid": 0, 00:18:21.500 "state": "enabled", 00:18:21.500 "thread": "nvmf_tgt_poll_group_000", 00:18:21.500 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:21.500 "listen_address": { 00:18:21.500 "trtype": "TCP", 00:18:21.500 "adrfam": "IPv4", 00:18:21.500 "traddr": "10.0.0.2", 00:18:21.500 "trsvcid": "4420" 00:18:21.500 }, 00:18:21.500 "peer_address": { 00:18:21.500 "trtype": "TCP", 00:18:21.500 "adrfam": "IPv4", 00:18:21.500 "traddr": "10.0.0.1", 00:18:21.500 "trsvcid": "43662" 00:18:21.500 }, 00:18:21.500 "auth": { 00:18:21.500 "state": "completed", 00:18:21.500 "digest": "sha384", 00:18:21.500 "dhgroup": "ffdhe3072" 00:18:21.500 } 00:18:21.500 } 00:18:21.500 ]' 00:18:21.500 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:21.500 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:21.500 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:21.501 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:21.501 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.761 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.761 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.761 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.761 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=: 00:18:21.761 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=: 00:18:22.330 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.330 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:22.330 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.330 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.591 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.591 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:22.591 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:22.591 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:22.591 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:22.591 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:18:22.591 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:22.591 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:22.591 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:22.591 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:22.591 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.591 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.591 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.591 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.591 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
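[annotation] With nvme0 attached, the script verifies the result from both sides: bdev_nvme_get_controllers on the host must list nvme0, and nvmf_subsystem_get_qpairs on the target must report the qpair's auth object as completed with the negotiated parameters. The three jq checks in the log reduce to the following, using the field names exactly as they appear in the qpairs JSON above (shown here for the sha384/ffdhe4096/key0 iteration, with rpc_py as in the previous sketch):

    qpairs=$($rpc_py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    # Tear down before the next key id is exercised
    $rpc_py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The backslash-escaped patterns in the log (e.g. \f\f\d\h\e\4\0\9\6) are an artifact of bash xtrace printing the right-hand side of [[ == ]]; the intent is a plain string equality.
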
00:18:22.591 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.591 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.591 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.851 00:18:22.851 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:22.851 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:22.851 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.111 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.111 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.111 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.111 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.111 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.111 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:23.111 { 00:18:23.111 "cntlid": 73, 00:18:23.111 "qid": 0, 00:18:23.111 "state": "enabled", 00:18:23.111 "thread": "nvmf_tgt_poll_group_000", 00:18:23.111 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:23.111 "listen_address": { 00:18:23.111 "trtype": "TCP", 00:18:23.111 "adrfam": "IPv4", 00:18:23.111 "traddr": "10.0.0.2", 00:18:23.111 "trsvcid": "4420" 00:18:23.111 }, 00:18:23.111 "peer_address": { 00:18:23.111 "trtype": "TCP", 00:18:23.111 "adrfam": "IPv4", 00:18:23.111 "traddr": "10.0.0.1", 00:18:23.111 "trsvcid": "43684" 00:18:23.111 }, 00:18:23.111 "auth": { 00:18:23.111 "state": "completed", 00:18:23.111 "digest": "sha384", 00:18:23.111 "dhgroup": "ffdhe4096" 00:18:23.111 } 00:18:23.111 } 00:18:23.111 ]' 00:18:23.111 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:23.111 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:23.111 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:23.111 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:23.112 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:23.112 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.112 
15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.112 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.372 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQzODNiNjFiNTMwODBlZTI0MTliNTQ4MmE0MjBmM2YyMWU0M2YxOTFlNjRjYzE0ytg3PA==: --dhchap-ctrl-secret DHHC-1:03:YzAzYWJiYjgwYTc2NDQzOTQ4M2U1NDg4NTJkZjNhMDE0ZTY5NzAxYmFjMjlkZjUzODdlYmYyMDQ3MzhkZWIxMpUjPms=: 00:18:23.372 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTQzODNiNjFiNTMwODBlZTI0MTliNTQ4MmE0MjBmM2YyMWU0M2YxOTFlNjRjYzE0ytg3PA==: --dhchap-ctrl-secret DHHC-1:03:YzAzYWJiYjgwYTc2NDQzOTQ4M2U1NDg4NTJkZjNhMDE0ZTY5NzAxYmFjMjlkZjUzODdlYmYyMDQ3MzhkZWIxMpUjPms=: 00:18:24.313 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.313 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:24.313 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.313 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.313 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.313 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:24.313 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:24.313 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:24.313 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:18:24.313 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.313 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:24.313 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:24.313 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:24.313 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.313 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.313 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.313 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.313 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.313 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.313 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.313 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.573 00:18:24.573 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:24.573 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:24.573 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.862 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.862 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.862 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.862 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.862 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.862 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:24.862 { 00:18:24.862 "cntlid": 75, 00:18:24.862 "qid": 0, 00:18:24.862 "state": "enabled", 00:18:24.862 "thread": "nvmf_tgt_poll_group_000", 00:18:24.862 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:24.862 "listen_address": { 00:18:24.862 "trtype": "TCP", 00:18:24.862 "adrfam": "IPv4", 00:18:24.862 "traddr": "10.0.0.2", 00:18:24.862 "trsvcid": "4420" 00:18:24.862 }, 00:18:24.862 "peer_address": { 00:18:24.862 "trtype": "TCP", 00:18:24.862 "adrfam": "IPv4", 00:18:24.862 "traddr": "10.0.0.1", 00:18:24.862 "trsvcid": "43714" 00:18:24.862 }, 00:18:24.862 "auth": { 00:18:24.862 "state": "completed", 00:18:24.862 "digest": "sha384", 00:18:24.862 "dhgroup": "ffdhe4096" 00:18:24.862 } 00:18:24.862 } 00:18:24.862 ]' 00:18:24.862 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:24.862 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:24.862 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:24.862 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:18:24.862 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:24.862 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.862 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.862 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.150 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: --dhchap-ctrl-secret DHHC-1:02:Y2YxMTcxMmE1Nzk3ODk4OWEwZDRkNTE3MjA1NWIzZDUzMGM2M2Y2ZjllNTQ3Y2Fj2NQJnQ==: 00:18:25.150 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: --dhchap-ctrl-secret DHHC-1:02:Y2YxMTcxMmE1Nzk3ODk4OWEwZDRkNTE3MjA1NWIzZDUzMGM2M2Y2ZjllNTQ3Y2Fj2NQJnQ==: 00:18:25.721 15:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.721 15:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:25.721 15:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.721 15:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.721 15:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.721 15:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:25.721 15:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:25.721 15:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:25.983 15:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:18:25.983 15:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:25.983 15:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:25.983 15:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:25.983 15:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:25.983 15:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.983 15:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.983 15:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.983 15:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.983 15:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.983 15:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.983 15:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.983 15:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.243 00:18:26.243 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:26.243 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:26.243 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.503 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.503 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.503 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.503 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.503 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.503 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:26.503 { 00:18:26.504 "cntlid": 77, 00:18:26.504 "qid": 0, 00:18:26.504 "state": "enabled", 00:18:26.504 "thread": "nvmf_tgt_poll_group_000", 00:18:26.504 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:26.504 "listen_address": { 00:18:26.504 "trtype": "TCP", 00:18:26.504 "adrfam": "IPv4", 00:18:26.504 "traddr": "10.0.0.2", 00:18:26.504 "trsvcid": "4420" 00:18:26.504 }, 00:18:26.504 "peer_address": { 00:18:26.504 "trtype": "TCP", 00:18:26.504 "adrfam": "IPv4", 00:18:26.504 "traddr": "10.0.0.1", 00:18:26.504 "trsvcid": "49134" 00:18:26.504 }, 00:18:26.504 "auth": { 00:18:26.504 "state": "completed", 00:18:26.504 "digest": "sha384", 00:18:26.504 "dhgroup": "ffdhe4096" 00:18:26.504 } 00:18:26.504 } 00:18:26.504 ]' 00:18:26.504 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:26.504 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:26.504 15:28:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:26.504 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:26.504 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:26.504 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.504 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.504 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.764 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: --dhchap-ctrl-secret DHHC-1:01:YzkzMmE2ODQ1YzFjYzk1OWVmMDU3NTIwM2MyY2EyNGT38soj: 00:18:26.764 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: --dhchap-ctrl-secret DHHC-1:01:YzkzMmE2ODQ1YzFjYzk1OWVmMDU3NTIwM2MyY2EyNGT38soj: 00:18:27.335 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.335 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:27.335 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.335 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.335 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.335 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:27.335 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:27.335 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:27.596 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:18:27.596 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:27.596 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:27.596 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:27.596 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:27.596 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.596 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:27.596 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.596 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.596 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.596 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:27.596 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:27.596 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:27.857 00:18:27.857 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:27.857 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.857 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:28.118 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.118 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.118 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.118 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.118 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.118 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:28.118 { 00:18:28.118 "cntlid": 79, 00:18:28.118 "qid": 0, 00:18:28.118 "state": "enabled", 00:18:28.118 "thread": "nvmf_tgt_poll_group_000", 00:18:28.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:28.118 "listen_address": { 00:18:28.118 "trtype": "TCP", 00:18:28.118 "adrfam": "IPv4", 00:18:28.118 "traddr": "10.0.0.2", 00:18:28.118 "trsvcid": "4420" 00:18:28.118 }, 00:18:28.118 "peer_address": { 00:18:28.118 "trtype": "TCP", 00:18:28.118 "adrfam": "IPv4", 00:18:28.118 "traddr": "10.0.0.1", 00:18:28.118 "trsvcid": "49164" 00:18:28.118 }, 00:18:28.118 "auth": { 00:18:28.118 "state": "completed", 00:18:28.118 "digest": "sha384", 00:18:28.118 "dhgroup": "ffdhe4096" 00:18:28.118 } 00:18:28.118 } 00:18:28.118 ]' 00:18:28.118 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:28.118 15:28:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:28.118 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.118 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:28.118 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:28.118 15:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.118 15:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.118 15:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.379 15:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=: 00:18:28.379 15:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=: 00:18:28.950 15:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.950 15:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:28.950 15:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.950 15:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.950 15:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.950 15:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:28.950 15:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:28.950 15:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:28.950 15:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:29.211 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:18:29.211 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:29.211 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:29.211 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:29.211 15:28:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:29.211 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.211 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.211 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.211 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.211 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.212 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.212 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.212 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.472 00:18:29.472 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:29.472 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:29.472 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.733 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.733 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.733 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.733 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.733 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.733 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:29.733 { 00:18:29.733 "cntlid": 81, 00:18:29.733 "qid": 0, 00:18:29.733 "state": "enabled", 00:18:29.733 "thread": "nvmf_tgt_poll_group_000", 00:18:29.733 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:29.733 "listen_address": { 00:18:29.733 "trtype": "TCP", 00:18:29.733 "adrfam": "IPv4", 00:18:29.733 "traddr": "10.0.0.2", 00:18:29.733 "trsvcid": "4420" 00:18:29.733 }, 00:18:29.733 "peer_address": { 00:18:29.733 "trtype": "TCP", 00:18:29.733 "adrfam": "IPv4", 00:18:29.733 "traddr": "10.0.0.1", 00:18:29.733 "trsvcid": "49198" 00:18:29.733 }, 00:18:29.733 "auth": { 00:18:29.733 "state": "completed", 00:18:29.733 "digest": 
"sha384", 00:18:29.733 "dhgroup": "ffdhe6144" 00:18:29.733 } 00:18:29.733 } 00:18:29.734 ]' 00:18:29.734 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:29.734 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:29.734 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:29.734 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:29.734 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:29.994 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.994 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.994 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.994 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQzODNiNjFiNTMwODBlZTI0MTliNTQ4MmE0MjBmM2YyMWU0M2YxOTFlNjRjYzE0ytg3PA==: --dhchap-ctrl-secret DHHC-1:03:YzAzYWJiYjgwYTc2NDQzOTQ4M2U1NDg4NTJkZjNhMDE0ZTY5NzAxYmFjMjlkZjUzODdlYmYyMDQ3MzhkZWIxMpUjPms=: 00:18:29.994 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTQzODNiNjFiNTMwODBlZTI0MTliNTQ4MmE0MjBmM2YyMWU0M2YxOTFlNjRjYzE0ytg3PA==: --dhchap-ctrl-secret DHHC-1:03:YzAzYWJiYjgwYTc2NDQzOTQ4M2U1NDg4NTJkZjNhMDE0ZTY5NzAxYmFjMjlkZjUzODdlYmYyMDQ3MzhkZWIxMpUjPms=: 00:18:30.565 15:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.827 15:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:30.827 15:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.827 15:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.827 15:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.827 15:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:30.827 15:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:30.827 15:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:30.827 15:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:18:30.827 15:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:30.827 15:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:30.827 15:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:30.827 15:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:30.827 15:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.827 15:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.827 15:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.827 15:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.827 15:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.827 15:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.827 15:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.827 15:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.400 00:18:31.400 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:31.400 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:31.400 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.400 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.400 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.400 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.400 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.400 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.400 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:31.400 { 00:18:31.400 "cntlid": 83, 00:18:31.400 "qid": 0, 00:18:31.400 "state": "enabled", 00:18:31.400 "thread": "nvmf_tgt_poll_group_000", 00:18:31.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:31.400 "listen_address": { 00:18:31.400 "trtype": "TCP", 00:18:31.400 "adrfam": "IPv4", 00:18:31.400 "traddr": "10.0.0.2", 00:18:31.400 
"trsvcid": "4420" 00:18:31.400 }, 00:18:31.400 "peer_address": { 00:18:31.400 "trtype": "TCP", 00:18:31.400 "adrfam": "IPv4", 00:18:31.400 "traddr": "10.0.0.1", 00:18:31.400 "trsvcid": "49230" 00:18:31.400 }, 00:18:31.400 "auth": { 00:18:31.400 "state": "completed", 00:18:31.400 "digest": "sha384", 00:18:31.400 "dhgroup": "ffdhe6144" 00:18:31.400 } 00:18:31.400 } 00:18:31.400 ]' 00:18:31.400 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:31.400 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:31.400 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:31.660 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:31.660 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:31.660 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.660 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.661 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.661 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: --dhchap-ctrl-secret DHHC-1:02:Y2YxMTcxMmE1Nzk3ODk4OWEwZDRkNTE3MjA1NWIzZDUzMGM2M2Y2ZjllNTQ3Y2Fj2NQJnQ==: 00:18:31.661 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: --dhchap-ctrl-secret DHHC-1:02:Y2YxMTcxMmE1Nzk3ODk4OWEwZDRkNTE3MjA1NWIzZDUzMGM2M2Y2ZjllNTQ3Y2Fj2NQJnQ==: 00:18:32.601 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.601 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.601 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:32.601 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.601 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.601 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.601 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:32.601 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:32.601 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:32.601 
15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:18:32.601 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:32.601 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:32.601 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:32.601 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:32.601 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.601 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.601 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.601 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.601 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.601 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.601 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.601 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.863 00:18:32.863 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:32.863 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:32.863 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.124 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.124 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.124 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.124 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.124 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.124 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:33.124 { 00:18:33.124 "cntlid": 85, 00:18:33.124 "qid": 0, 00:18:33.124 "state": "enabled", 00:18:33.124 "thread": "nvmf_tgt_poll_group_000", 00:18:33.124 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:33.124 "listen_address": { 00:18:33.124 "trtype": "TCP", 00:18:33.124 "adrfam": "IPv4", 00:18:33.124 "traddr": "10.0.0.2", 00:18:33.124 "trsvcid": "4420" 00:18:33.124 }, 00:18:33.124 "peer_address": { 00:18:33.124 "trtype": "TCP", 00:18:33.124 "adrfam": "IPv4", 00:18:33.124 "traddr": "10.0.0.1", 00:18:33.124 "trsvcid": "49250" 00:18:33.124 }, 00:18:33.124 "auth": { 00:18:33.124 "state": "completed", 00:18:33.124 "digest": "sha384", 00:18:33.124 "dhgroup": "ffdhe6144" 00:18:33.124 } 00:18:33.124 } 00:18:33.124 ]' 00:18:33.124 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:33.124 15:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:33.124 15:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:33.385 15:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:33.385 15:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:33.385 15:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.385 15:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.385 15:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.385 15:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: --dhchap-ctrl-secret DHHC-1:01:YzkzMmE2ODQ1YzFjYzk1OWVmMDU3NTIwM2MyY2EyNGT38soj: 00:18:33.385 15:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: --dhchap-ctrl-secret DHHC-1:01:YzkzMmE2ODQ1YzFjYzk1OWVmMDU3NTIwM2MyY2EyNGT38soj: 00:18:34.327 15:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.327 15:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:34.327 15:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.327 15:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.327 15:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.327 15:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:34.327 15:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:34.327 15:28:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:34.327 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:18:34.327 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:34.327 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:34.327 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:34.327 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:34.327 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.327 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:34.327 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.327 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.327 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.327 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:34.327 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:34.327 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:34.589 00:18:34.589 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:34.589 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:34.589 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.849 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.850 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.850 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.850 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.850 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.850 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:34.850 { 00:18:34.850 "cntlid": 87, 
00:18:34.850 "qid": 0, 00:18:34.850 "state": "enabled", 00:18:34.850 "thread": "nvmf_tgt_poll_group_000", 00:18:34.850 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:34.850 "listen_address": { 00:18:34.850 "trtype": "TCP", 00:18:34.850 "adrfam": "IPv4", 00:18:34.850 "traddr": "10.0.0.2", 00:18:34.850 "trsvcid": "4420" 00:18:34.850 }, 00:18:34.850 "peer_address": { 00:18:34.850 "trtype": "TCP", 00:18:34.850 "adrfam": "IPv4", 00:18:34.850 "traddr": "10.0.0.1", 00:18:34.850 "trsvcid": "49286" 00:18:34.850 }, 00:18:34.850 "auth": { 00:18:34.850 "state": "completed", 00:18:34.850 "digest": "sha384", 00:18:34.850 "dhgroup": "ffdhe6144" 00:18:34.850 } 00:18:34.850 } 00:18:34.850 ]' 00:18:34.850 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:34.850 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:34.850 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:34.850 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:34.850 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:35.110 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.110 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.110 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.110 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=: 00:18:35.110 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=: 00:18:36.051 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.051 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:36.051 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.051 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.051 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.051 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:36.051 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:36.051 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
00:18:36.051 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:18:36.051 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:18:36.051 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0
00:18:36.051 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:36.051 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:36.051 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:18:36.051 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:18:36.051 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:36.051 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:36.051 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:36.051 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:36.051 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:36.051 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:36.051 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:36.051 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:36.623
00:18:36.623 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:36.623 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:36.623 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:36.623 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:36.623 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:36.623 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:36.623 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:36.623 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.623 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:36.623 { 00:18:36.623 "cntlid": 89, 00:18:36.623 "qid": 0, 00:18:36.623 "state": "enabled", 00:18:36.623 "thread": "nvmf_tgt_poll_group_000", 00:18:36.623 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:36.623 "listen_address": { 00:18:36.623 "trtype": "TCP", 00:18:36.623 "adrfam": "IPv4", 00:18:36.623 "traddr": "10.0.0.2", 00:18:36.623 "trsvcid": "4420" 00:18:36.623 }, 00:18:36.623 "peer_address": { 00:18:36.623 "trtype": "TCP", 00:18:36.623 "adrfam": "IPv4", 00:18:36.623 "traddr": "10.0.0.1", 00:18:36.623 "trsvcid": "36806" 00:18:36.623 }, 00:18:36.623 "auth": { 00:18:36.623 "state": "completed", 00:18:36.623 "digest": "sha384", 00:18:36.623 "dhgroup": "ffdhe8192" 00:18:36.623 } 00:18:36.623 } 00:18:36.623 ]' 00:18:36.623 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:36.623 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:36.884 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:36.884 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:36.884 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:36.884 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.884 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.884 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.145 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQzODNiNjFiNTMwODBlZTI0MTliNTQ4MmE0MjBmM2YyMWU0M2YxOTFlNjRjYzE0ytg3PA==: --dhchap-ctrl-secret DHHC-1:03:YzAzYWJiYjgwYTc2NDQzOTQ4M2U1NDg4NTJkZjNhMDE0ZTY5NzAxYmFjMjlkZjUzODdlYmYyMDQ3MzhkZWIxMpUjPms=: 00:18:37.145 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTQzODNiNjFiNTMwODBlZTI0MTliNTQ4MmE0MjBmM2YyMWU0M2YxOTFlNjRjYzE0ytg3PA==: --dhchap-ctrl-secret DHHC-1:03:YzAzYWJiYjgwYTc2NDQzOTQ4M2U1NDg4NTJkZjNhMDE0ZTY5NzAxYmFjMjlkZjUzODdlYmYyMDQ3MzhkZWIxMpUjPms=: 00:18:37.716 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.716 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:37.716 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.716 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.716 15:28:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.716 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:37.716 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:37.716 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:37.977 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:18:37.977 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:37.977 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:37.977 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:37.977 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:37.977 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.977 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.977 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.977 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.977 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.977 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.977 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.977 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.238 00:18:38.499 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:38.499 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:38.499 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.499 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.499 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:38.499 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.499 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.499 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.499 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:38.499 { 00:18:38.499 "cntlid": 91, 00:18:38.499 "qid": 0, 00:18:38.499 "state": "enabled", 00:18:38.499 "thread": "nvmf_tgt_poll_group_000", 00:18:38.499 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:38.499 "listen_address": { 00:18:38.499 "trtype": "TCP", 00:18:38.499 "adrfam": "IPv4", 00:18:38.499 "traddr": "10.0.0.2", 00:18:38.499 "trsvcid": "4420" 00:18:38.499 }, 00:18:38.499 "peer_address": { 00:18:38.499 "trtype": "TCP", 00:18:38.500 "adrfam": "IPv4", 00:18:38.500 "traddr": "10.0.0.1", 00:18:38.500 "trsvcid": "36822" 00:18:38.500 }, 00:18:38.500 "auth": { 00:18:38.500 "state": "completed", 00:18:38.500 "digest": "sha384", 00:18:38.500 "dhgroup": "ffdhe8192" 00:18:38.500 } 00:18:38.500 } 00:18:38.500 ]' 00:18:38.500 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:38.500 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:38.500 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:38.760 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:38.760 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:38.760 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.760 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.760 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.021 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: --dhchap-ctrl-secret DHHC-1:02:Y2YxMTcxMmE1Nzk3ODk4OWEwZDRkNTE3MjA1NWIzZDUzMGM2M2Y2ZjllNTQ3Y2Fj2NQJnQ==: 00:18:39.021 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: --dhchap-ctrl-secret DHHC-1:02:Y2YxMTcxMmE1Nzk3ODk4OWEwZDRkNTE3MjA1NWIzZDUzMGM2M2Y2ZjllNTQ3Y2Fj2NQJnQ==: 00:18:39.591 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.591 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.591 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:39.591 15:28:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.591 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.591 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.591 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:39.591 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:39.591 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:39.852 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:18:39.852 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:39.852 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:39.852 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:39.852 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:39.852 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.852 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.852 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.852 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.852 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.852 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.852 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.852 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.113 00:18:40.113 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:40.113 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.113 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:40.373 15:28:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.373 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.373 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.373 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.373 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.373 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:40.373 { 00:18:40.373 "cntlid": 93, 00:18:40.373 "qid": 0, 00:18:40.373 "state": "enabled", 00:18:40.373 "thread": "nvmf_tgt_poll_group_000", 00:18:40.373 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:40.373 "listen_address": { 00:18:40.373 "trtype": "TCP", 00:18:40.373 "adrfam": "IPv4", 00:18:40.373 "traddr": "10.0.0.2", 00:18:40.373 "trsvcid": "4420" 00:18:40.373 }, 00:18:40.373 "peer_address": { 00:18:40.373 "trtype": "TCP", 00:18:40.373 "adrfam": "IPv4", 00:18:40.373 "traddr": "10.0.0.1", 00:18:40.373 "trsvcid": "36856" 00:18:40.373 }, 00:18:40.373 "auth": { 00:18:40.373 "state": "completed", 00:18:40.373 "digest": "sha384", 00:18:40.373 "dhgroup": "ffdhe8192" 00:18:40.373 } 00:18:40.373 } 00:18:40.373 ]' 00:18:40.373 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:40.373 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:40.373 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:40.634 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:40.634 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:40.634 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.634 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.634 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.634 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: --dhchap-ctrl-secret DHHC-1:01:YzkzMmE2ODQ1YzFjYzk1OWVmMDU3NTIwM2MyY2EyNGT38soj: 00:18:40.634 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: --dhchap-ctrl-secret DHHC-1:01:YzkzMmE2ODQ1YzFjYzk1OWVmMDU3NTIwM2MyY2EyNGT38soj: 00:18:41.576 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.576 15:28:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:41.576 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.576 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.576 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.576 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:41.576 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:41.576 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:41.576 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:18:41.576 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:41.576 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:41.576 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:41.576 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:41.576 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.576 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:41.576 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.576 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.576 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.576 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:41.576 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:41.576 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:42.146 00:18:42.146 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:42.146 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:42.146 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.146 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.146 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.146 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.146 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.146 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.407 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:42.407 { 00:18:42.407 "cntlid": 95, 00:18:42.407 "qid": 0, 00:18:42.407 "state": "enabled", 00:18:42.407 "thread": "nvmf_tgt_poll_group_000", 00:18:42.407 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:42.407 "listen_address": { 00:18:42.407 "trtype": "TCP", 00:18:42.407 "adrfam": "IPv4", 00:18:42.407 "traddr": "10.0.0.2", 00:18:42.407 "trsvcid": "4420" 00:18:42.407 }, 00:18:42.407 "peer_address": { 00:18:42.407 "trtype": "TCP", 00:18:42.407 "adrfam": "IPv4", 00:18:42.407 "traddr": "10.0.0.1", 00:18:42.407 "trsvcid": "36872" 00:18:42.407 }, 00:18:42.407 "auth": { 00:18:42.407 "state": "completed", 00:18:42.407 "digest": "sha384", 00:18:42.407 "dhgroup": "ffdhe8192" 00:18:42.407 } 00:18:42.407 } 00:18:42.407 ]' 00:18:42.407 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:42.407 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:42.407 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:42.407 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:42.407 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:42.407 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.407 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.407 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.668 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=: 00:18:42.668 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=: 00:18:43.239 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.239 15:28:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:43.239 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.239 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.239 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.239 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:43.239 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:43.239 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:43.239 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:43.239 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:43.499 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:18:43.500 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:43.500 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:43.500 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:43.500 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:43.500 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.500 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.500 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.500 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.500 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.500 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.500 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.500 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.500 00:18:43.760 
15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:43.760 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.760 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:43.760 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.760 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.760 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.760 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.760 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.760 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:43.760 { 00:18:43.760 "cntlid": 97, 00:18:43.760 "qid": 0, 00:18:43.760 "state": "enabled", 00:18:43.760 "thread": "nvmf_tgt_poll_group_000", 00:18:43.760 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:43.760 "listen_address": { 00:18:43.760 "trtype": "TCP", 00:18:43.760 "adrfam": "IPv4", 00:18:43.760 "traddr": "10.0.0.2", 00:18:43.760 "trsvcid": "4420" 00:18:43.760 }, 00:18:43.760 "peer_address": { 00:18:43.760 "trtype": "TCP", 00:18:43.760 "adrfam": "IPv4", 00:18:43.760 "traddr": "10.0.0.1", 00:18:43.760 "trsvcid": "36896" 00:18:43.760 }, 00:18:43.760 "auth": { 00:18:43.761 "state": "completed", 00:18:43.761 "digest": "sha512", 00:18:43.761 "dhgroup": "null" 00:18:43.761 } 00:18:43.761 } 00:18:43.761 ]' 00:18:43.761 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:44.020 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:44.020 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:44.020 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:44.020 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:44.020 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.020 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.020 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.281 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQzODNiNjFiNTMwODBlZTI0MTliNTQ4MmE0MjBmM2YyMWU0M2YxOTFlNjRjYzE0ytg3PA==: --dhchap-ctrl-secret DHHC-1:03:YzAzYWJiYjgwYTc2NDQzOTQ4M2U1NDg4NTJkZjNhMDE0ZTY5NzAxYmFjMjlkZjUzODdlYmYyMDQ3MzhkZWIxMpUjPms=: 00:18:44.281 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTQzODNiNjFiNTMwODBlZTI0MTliNTQ4MmE0MjBmM2YyMWU0M2YxOTFlNjRjYzE0ytg3PA==: --dhchap-ctrl-secret DHHC-1:03:YzAzYWJiYjgwYTc2NDQzOTQ4M2U1NDg4NTJkZjNhMDE0ZTY5NzAxYmFjMjlkZjUzODdlYmYyMDQ3MzhkZWIxMpUjPms=: 00:18:44.851 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.851 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:44.851 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.851 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.851 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.851 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:44.851 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:44.851 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:45.112 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:18:45.112 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:45.112 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:45.112 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:45.112 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:45.112 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.112 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.112 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.112 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.112 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.112 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.112 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.112 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.112 00:18:45.372 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:45.372 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.372 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:45.372 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.372 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.372 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.372 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.372 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.372 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:45.372 { 00:18:45.372 "cntlid": 99, 00:18:45.372 "qid": 0, 00:18:45.372 "state": "enabled", 00:18:45.372 "thread": "nvmf_tgt_poll_group_000", 00:18:45.372 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:45.372 "listen_address": { 00:18:45.372 "trtype": "TCP", 00:18:45.372 "adrfam": "IPv4", 00:18:45.372 "traddr": "10.0.0.2", 00:18:45.372 "trsvcid": "4420" 00:18:45.372 }, 00:18:45.372 "peer_address": { 00:18:45.372 "trtype": "TCP", 00:18:45.372 "adrfam": "IPv4", 00:18:45.372 "traddr": "10.0.0.1", 00:18:45.372 "trsvcid": "36924" 00:18:45.372 }, 00:18:45.372 "auth": { 00:18:45.372 "state": "completed", 00:18:45.372 "digest": "sha512", 00:18:45.372 "dhgroup": "null" 00:18:45.372 } 00:18:45.372 } 00:18:45.372 ]' 00:18:45.372 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:45.372 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:45.372 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:45.632 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:45.632 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:45.632 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.632 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.632 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.632 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: --dhchap-ctrl-secret DHHC-1:02:Y2YxMTcxMmE1Nzk3ODk4OWEwZDRkNTE3MjA1NWIzZDUzMGM2M2Y2ZjllNTQ3Y2Fj2NQJnQ==: 00:18:45.632 15:28:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: --dhchap-ctrl-secret DHHC-1:02:Y2YxMTcxMmE1Nzk3ODk4OWEwZDRkNTE3MjA1NWIzZDUzMGM2M2Y2ZjllNTQ3Y2Fj2NQJnQ==: 00:18:46.572 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.572 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:46.572 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.572 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.572 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.572 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:46.572 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:46.572 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:46.572 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:18:46.572 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:46.572 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:46.572 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:46.572 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:46.572 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.572 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.572 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.572 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.572 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.572 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.572 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
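
Stripped of the xtrace noise, the round being traced here (sha512 / null, key2) reduces to the rpc.py sequence sketched below. The RPC method names, socket path, addresses, NQNs, and key names are taken from the trace itself; the shell variables are mine, and it is an assumption that rpc_cmd in the harness addresses the target app's default RPC socket while hostrpc passes -s /var/tmp/host.sock:

    # Minimal sketch of one connect_authenticate round, under the assumptions above.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOST_SOCK=/var/tmp/host.sock
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

    # Target side: allow the host NQN and bind its DH-HMAC-CHAP key pair.
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Host side: attach a controller; authentication runs during the fabric CONNECT.
    $RPC -s "$HOST_SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Verify what was negotiated on the target's qpair, then tear down.
    $RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.digest'   # expect: sha512
    $RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.dhgroup'  # expect: null
    $RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'    # expect: completed
    $RPC -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0

Note the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion in the trace: when ckeys[keyid] is empty, as in the key3 rounds above, the --dhchap-ctrlr-key argument is omitted entirely, so those rounds exercise host-only (unidirectional) authentication.
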
00:18:46.572 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.833 00:18:46.833 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:46.833 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:46.833 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.093 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.093 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.093 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.093 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.093 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.093 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:47.093 { 00:18:47.093 "cntlid": 101, 00:18:47.093 "qid": 0, 00:18:47.093 "state": "enabled", 00:18:47.093 "thread": "nvmf_tgt_poll_group_000", 00:18:47.093 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:47.093 "listen_address": { 00:18:47.093 "trtype": "TCP", 00:18:47.093 "adrfam": "IPv4", 00:18:47.093 "traddr": "10.0.0.2", 00:18:47.093 "trsvcid": "4420" 00:18:47.093 }, 00:18:47.093 "peer_address": { 00:18:47.093 "trtype": "TCP", 00:18:47.093 "adrfam": "IPv4", 00:18:47.093 "traddr": "10.0.0.1", 00:18:47.093 "trsvcid": "46556" 00:18:47.093 }, 00:18:47.093 "auth": { 00:18:47.093 "state": "completed", 00:18:47.093 "digest": "sha512", 00:18:47.093 "dhgroup": "null" 00:18:47.093 } 00:18:47.093 } 00:18:47.093 ]' 00:18:47.093 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:47.093 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:47.093 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:47.093 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:47.093 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:47.093 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.093 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.093 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.354 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: --dhchap-ctrl-secret DHHC-1:01:YzkzMmE2ODQ1YzFjYzk1OWVmMDU3NTIwM2MyY2EyNGT38soj: 00:18:47.354 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: --dhchap-ctrl-secret DHHC-1:01:YzkzMmE2ODQ1YzFjYzk1OWVmMDU3NTIwM2MyY2EyNGT38soj: 00:18:47.925 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.925 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:47.925 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.925 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.925 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.925 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:47.925 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:47.925 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:48.185 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:18:48.185 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:48.185 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:48.185 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:48.185 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:48.185 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.185 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:48.185 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.185 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.185 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.185 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:48.185 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:48.185 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:48.446 00:18:48.446 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:48.446 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.446 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:48.708 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.708 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.708 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.708 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.708 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.708 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:48.708 { 00:18:48.708 "cntlid": 103, 00:18:48.708 "qid": 0, 00:18:48.708 "state": "enabled", 00:18:48.708 "thread": "nvmf_tgt_poll_group_000", 00:18:48.708 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:48.708 "listen_address": { 00:18:48.708 "trtype": "TCP", 00:18:48.708 "adrfam": "IPv4", 00:18:48.708 "traddr": "10.0.0.2", 00:18:48.708 "trsvcid": "4420" 00:18:48.708 }, 00:18:48.708 "peer_address": { 00:18:48.708 "trtype": "TCP", 00:18:48.708 "adrfam": "IPv4", 00:18:48.708 "traddr": "10.0.0.1", 00:18:48.708 "trsvcid": "46568" 00:18:48.708 }, 00:18:48.708 "auth": { 00:18:48.708 "state": "completed", 00:18:48.708 "digest": "sha512", 00:18:48.708 "dhgroup": "null" 00:18:48.708 } 00:18:48.708 } 00:18:48.708 ]' 00:18:48.708 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:48.708 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:48.708 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:48.708 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:48.708 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:48.708 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.708 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.708 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.969 15:28:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=: 00:18:48.969 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=: 00:18:49.539 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.539 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:49.539 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.539 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.539 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.539 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:49.539 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:49.539 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:49.539 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:49.799 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:18:49.799 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:49.799 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:49.799 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:49.799 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:49.799 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.799 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.799 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.799 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.799 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.799 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
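
Each round also repeats the connect through the kernel initiator, visible above as the nvme_connect / nvme disconnect records. In isolation that leg looks roughly like the sketch below; the flags mirror the trace, the secret values are placeholders rather than real keys (actual runs pass the full DHHC-1:<n>:<base64>: strings logged above, where the <n> field identifies the key's hash transform and 00 denotes an untransformed key), and nvme gen-dhchap-key is one assumed way such secrets could be produced:

    # Kernel-initiator leg of one round; placeholder secrets, flags as in the trace.
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
    HOST_KEY='DHHC-1:00:<base64 host key>:'   # placeholder, not a real key
    CTRL_KEY='DHHC-1:03:<base64 ctrl key>:'   # placeholder, not a real key

    # -i 1 limits the connection to one I/O queue and -l 0 sets the
    # controller-loss timeout to zero, matching the trace above.
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid "$HOSTID" -l 0 \
        --dhchap-secret "$HOST_KEY" --dhchap-ctrl-secret "$CTRL_KEY"

    nvme disconnect -n "$SUBNQN"
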
00:18:49.799 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.799 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.061 00:18:50.061 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:50.061 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:50.061 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.322 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.322 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.322 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.322 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.322 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.322 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:50.322 { 00:18:50.322 "cntlid": 105, 00:18:50.322 "qid": 0, 00:18:50.322 "state": "enabled", 00:18:50.322 "thread": "nvmf_tgt_poll_group_000", 00:18:50.322 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:50.322 "listen_address": { 00:18:50.322 "trtype": "TCP", 00:18:50.322 "adrfam": "IPv4", 00:18:50.322 "traddr": "10.0.0.2", 00:18:50.322 "trsvcid": "4420" 00:18:50.322 }, 00:18:50.322 "peer_address": { 00:18:50.322 "trtype": "TCP", 00:18:50.322 "adrfam": "IPv4", 00:18:50.322 "traddr": "10.0.0.1", 00:18:50.322 "trsvcid": "46596" 00:18:50.322 }, 00:18:50.322 "auth": { 00:18:50.322 "state": "completed", 00:18:50.322 "digest": "sha512", 00:18:50.322 "dhgroup": "ffdhe2048" 00:18:50.322 } 00:18:50.322 } 00:18:50.322 ]' 00:18:50.322 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:50.322 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:50.322 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:50.322 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:50.322 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:50.322 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.322 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.322 15:28:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.583 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQzODNiNjFiNTMwODBlZTI0MTliNTQ4MmE0MjBmM2YyMWU0M2YxOTFlNjRjYzE0ytg3PA==: --dhchap-ctrl-secret DHHC-1:03:YzAzYWJiYjgwYTc2NDQzOTQ4M2U1NDg4NTJkZjNhMDE0ZTY5NzAxYmFjMjlkZjUzODdlYmYyMDQ3MzhkZWIxMpUjPms=: 00:18:50.583 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTQzODNiNjFiNTMwODBlZTI0MTliNTQ4MmE0MjBmM2YyMWU0M2YxOTFlNjRjYzE0ytg3PA==: --dhchap-ctrl-secret DHHC-1:03:YzAzYWJiYjgwYTc2NDQzOTQ4M2U1NDg4NTJkZjNhMDE0ZTY5NzAxYmFjMjlkZjUzODdlYmYyMDQ3MzhkZWIxMpUjPms=: 00:18:51.154 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.154 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:51.154 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.154 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.154 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.154 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:51.154 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:51.154 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:51.413 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:18:51.413 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:51.413 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:51.413 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:51.413 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:51.413 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.413 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.413 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.413 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:51.413 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.413 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.414 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.414 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.673 00:18:51.673 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:51.673 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:51.673 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.933 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.933 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.933 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.933 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.933 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.933 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:51.933 { 00:18:51.933 "cntlid": 107, 00:18:51.933 "qid": 0, 00:18:51.933 "state": "enabled", 00:18:51.934 "thread": "nvmf_tgt_poll_group_000", 00:18:51.934 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:51.934 "listen_address": { 00:18:51.934 "trtype": "TCP", 00:18:51.934 "adrfam": "IPv4", 00:18:51.934 "traddr": "10.0.0.2", 00:18:51.934 "trsvcid": "4420" 00:18:51.934 }, 00:18:51.934 "peer_address": { 00:18:51.934 "trtype": "TCP", 00:18:51.934 "adrfam": "IPv4", 00:18:51.934 "traddr": "10.0.0.1", 00:18:51.934 "trsvcid": "46626" 00:18:51.934 }, 00:18:51.934 "auth": { 00:18:51.934 "state": "completed", 00:18:51.934 "digest": "sha512", 00:18:51.934 "dhgroup": "ffdhe2048" 00:18:51.934 } 00:18:51.934 } 00:18:51.934 ]' 00:18:51.934 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:51.934 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:51.934 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:51.934 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:51.934 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:18:51.934 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.934 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.934 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.194 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: --dhchap-ctrl-secret DHHC-1:02:Y2YxMTcxMmE1Nzk3ODk4OWEwZDRkNTE3MjA1NWIzZDUzMGM2M2Y2ZjllNTQ3Y2Fj2NQJnQ==: 00:18:52.194 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: --dhchap-ctrl-secret DHHC-1:02:Y2YxMTcxMmE1Nzk3ODk4OWEwZDRkNTE3MjA1NWIzZDUzMGM2M2Y2ZjllNTQ3Y2Fj2NQJnQ==: 00:18:52.764 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.764 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:52.764 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.764 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.764 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.764 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:52.764 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:52.764 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:53.024 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:18:53.024 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:53.024 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:53.024 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:53.024 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:53.024 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.024 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
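The [[ ... == ... ]] assertions in the trace check the auth object that nvmf_subsystem_get_qpairs returns for the freshly attached qpair: the negotiated digest and DH group must be exactly the ones configured, and state must read completed once the handshake has finished. Spelled out as stand-alone checks (the jq filters are the ones the test itself uses; the rpc.py path is as above):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]      # negotiated hash
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]   # negotiated DH group
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # handshake finished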
00:18:53.024 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.024 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.024 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.024 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.024 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.024 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.284 00:18:53.284 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:53.284 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:53.284 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.545 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.545 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.545 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.545 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.545 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.545 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:53.545 { 00:18:53.545 "cntlid": 109, 00:18:53.545 "qid": 0, 00:18:53.545 "state": "enabled", 00:18:53.545 "thread": "nvmf_tgt_poll_group_000", 00:18:53.545 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:53.545 "listen_address": { 00:18:53.545 "trtype": "TCP", 00:18:53.545 "adrfam": "IPv4", 00:18:53.545 "traddr": "10.0.0.2", 00:18:53.545 "trsvcid": "4420" 00:18:53.545 }, 00:18:53.545 "peer_address": { 00:18:53.545 "trtype": "TCP", 00:18:53.545 "adrfam": "IPv4", 00:18:53.545 "traddr": "10.0.0.1", 00:18:53.545 "trsvcid": "46650" 00:18:53.545 }, 00:18:53.545 "auth": { 00:18:53.545 "state": "completed", 00:18:53.545 "digest": "sha512", 00:18:53.545 "dhgroup": "ffdhe2048" 00:18:53.545 } 00:18:53.545 } 00:18:53.545 ]' 00:18:53.545 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:53.545 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:53.545 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:53.545 15:28:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:53.545 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:53.545 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.545 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.545 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.806 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: --dhchap-ctrl-secret DHHC-1:01:YzkzMmE2ODQ1YzFjYzk1OWVmMDU3NTIwM2MyY2EyNGT38soj: 00:18:53.806 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: --dhchap-ctrl-secret DHHC-1:01:YzkzMmE2ODQ1YzFjYzk1OWVmMDU3NTIwM2MyY2EyNGT38soj: 00:18:54.378 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.378 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:54.378 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.378 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.378 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.378 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:54.378 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:54.378 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:54.639 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:18:54.639 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:54.639 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:54.639 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:54.639 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:54.639 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.639 15:28:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:54.639 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.639 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.639 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.639 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:54.639 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:54.639 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:54.901 00:18:54.901 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:54.901 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:54.901 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.221 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.221 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.221 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.221 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.221 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.221 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:55.221 { 00:18:55.221 "cntlid": 111, 00:18:55.221 "qid": 0, 00:18:55.221 "state": "enabled", 00:18:55.221 "thread": "nvmf_tgt_poll_group_000", 00:18:55.221 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:55.221 "listen_address": { 00:18:55.221 "trtype": "TCP", 00:18:55.221 "adrfam": "IPv4", 00:18:55.221 "traddr": "10.0.0.2", 00:18:55.221 "trsvcid": "4420" 00:18:55.221 }, 00:18:55.221 "peer_address": { 00:18:55.221 "trtype": "TCP", 00:18:55.221 "adrfam": "IPv4", 00:18:55.221 "traddr": "10.0.0.1", 00:18:55.221 "trsvcid": "46674" 00:18:55.221 }, 00:18:55.221 "auth": { 00:18:55.221 "state": "completed", 00:18:55.221 "digest": "sha512", 00:18:55.221 "dhgroup": "ffdhe2048" 00:18:55.221 } 00:18:55.221 } 00:18:55.221 ]' 00:18:55.221 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:55.221 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:55.221 
15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:55.221 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:55.221 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:55.221 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.221 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.221 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.484 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=: 00:18:55.484 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=: 00:18:56.151 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.151 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:56.151 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.151 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.151 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.151 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:56.151 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:56.151 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:56.151 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:56.151 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:18:56.151 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:56.151 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:56.151 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:56.151 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:56.151 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.151 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.151 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.151 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.151 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.151 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.151 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.151 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.413 00:18:56.413 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:56.413 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:56.413 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.673 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.674 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.674 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.674 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.674 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.674 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:56.674 { 00:18:56.674 "cntlid": 113, 00:18:56.674 "qid": 0, 00:18:56.674 "state": "enabled", 00:18:56.674 "thread": "nvmf_tgt_poll_group_000", 00:18:56.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:56.674 "listen_address": { 00:18:56.674 "trtype": "TCP", 00:18:56.674 "adrfam": "IPv4", 00:18:56.674 "traddr": "10.0.0.2", 00:18:56.674 "trsvcid": "4420" 00:18:56.674 }, 00:18:56.674 "peer_address": { 00:18:56.674 "trtype": "TCP", 00:18:56.674 "adrfam": "IPv4", 00:18:56.674 "traddr": "10.0.0.1", 00:18:56.674 "trsvcid": "43984" 00:18:56.674 }, 00:18:56.674 "auth": { 00:18:56.674 "state": "completed", 00:18:56.674 "digest": "sha512", 00:18:56.674 "dhgroup": "ffdhe3072" 00:18:56.674 } 00:18:56.674 } 00:18:56.674 ]' 00:18:56.674 15:28:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:56.674 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:56.674 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:56.674 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:56.674 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:56.674 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.674 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.674 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.934 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQzODNiNjFiNTMwODBlZTI0MTliNTQ4MmE0MjBmM2YyMWU0M2YxOTFlNjRjYzE0ytg3PA==: --dhchap-ctrl-secret DHHC-1:03:YzAzYWJiYjgwYTc2NDQzOTQ4M2U1NDg4NTJkZjNhMDE0ZTY5NzAxYmFjMjlkZjUzODdlYmYyMDQ3MzhkZWIxMpUjPms=: 00:18:56.934 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTQzODNiNjFiNTMwODBlZTI0MTliNTQ4MmE0MjBmM2YyMWU0M2YxOTFlNjRjYzE0ytg3PA==: --dhchap-ctrl-secret DHHC-1:03:YzAzYWJiYjgwYTc2NDQzOTQ4M2U1NDg4NTJkZjNhMDE0ZTY5NzAxYmFjMjlkZjUzODdlYmYyMDQ3MzhkZWIxMpUjPms=: 00:18:57.505 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.505 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:57.505 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.505 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.505 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.505 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:57.505 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:57.505 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:57.766 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:18:57.766 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:57.766 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:18:57.766 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:57.766 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:57.766 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.766 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.766 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.766 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.766 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.766 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.766 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.766 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.026 00:18:58.026 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:58.026 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.026 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:58.286 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.286 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.286 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.286 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.286 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.286 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:58.286 { 00:18:58.286 "cntlid": 115, 00:18:58.286 "qid": 0, 00:18:58.286 "state": "enabled", 00:18:58.286 "thread": "nvmf_tgt_poll_group_000", 00:18:58.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:58.286 "listen_address": { 00:18:58.286 "trtype": "TCP", 00:18:58.286 "adrfam": "IPv4", 00:18:58.286 "traddr": "10.0.0.2", 00:18:58.286 "trsvcid": "4420" 00:18:58.286 }, 00:18:58.286 "peer_address": { 00:18:58.286 "trtype": "TCP", 00:18:58.286 "adrfam": "IPv4", 
00:18:58.286 "traddr": "10.0.0.1", 00:18:58.286 "trsvcid": "44016" 00:18:58.286 }, 00:18:58.286 "auth": { 00:18:58.286 "state": "completed", 00:18:58.286 "digest": "sha512", 00:18:58.286 "dhgroup": "ffdhe3072" 00:18:58.286 } 00:18:58.286 } 00:18:58.286 ]' 00:18:58.286 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:58.286 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:58.286 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:58.286 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:58.287 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:58.287 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.287 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.287 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.547 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: --dhchap-ctrl-secret DHHC-1:02:Y2YxMTcxMmE1Nzk3ODk4OWEwZDRkNTE3MjA1NWIzZDUzMGM2M2Y2ZjllNTQ3Y2Fj2NQJnQ==: 00:18:58.547 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: --dhchap-ctrl-secret DHHC-1:02:Y2YxMTcxMmE1Nzk3ODk4OWEwZDRkNTE3MjA1NWIzZDUzMGM2M2Y2ZjllNTQ3Y2Fj2NQJnQ==: 00:18:59.118 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.118 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:59.118 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.118 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.118 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.118 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:59.118 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:59.118 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:59.379 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:18:59.379 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:59.379 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:59.379 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:59.379 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:59.379 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.379 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.379 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.379 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.379 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.379 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.379 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.380 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.640 00:18:59.640 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:59.640 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.640 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:59.901 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.901 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.901 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.901 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.901 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.901 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:59.901 { 00:18:59.901 "cntlid": 117, 00:18:59.901 "qid": 0, 00:18:59.901 "state": "enabled", 00:18:59.901 "thread": "nvmf_tgt_poll_group_000", 00:18:59.901 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:59.901 "listen_address": { 00:18:59.901 "trtype": "TCP", 
00:18:59.901 "adrfam": "IPv4", 00:18:59.901 "traddr": "10.0.0.2", 00:18:59.901 "trsvcid": "4420" 00:18:59.901 }, 00:18:59.901 "peer_address": { 00:18:59.901 "trtype": "TCP", 00:18:59.901 "adrfam": "IPv4", 00:18:59.901 "traddr": "10.0.0.1", 00:18:59.901 "trsvcid": "44046" 00:18:59.901 }, 00:18:59.901 "auth": { 00:18:59.901 "state": "completed", 00:18:59.901 "digest": "sha512", 00:18:59.901 "dhgroup": "ffdhe3072" 00:18:59.901 } 00:18:59.901 } 00:18:59.901 ]' 00:18:59.901 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:59.901 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:59.901 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:59.901 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:59.901 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:59.901 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.901 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.901 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.163 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: --dhchap-ctrl-secret DHHC-1:01:YzkzMmE2ODQ1YzFjYzk1OWVmMDU3NTIwM2MyY2EyNGT38soj: 00:19:00.163 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: --dhchap-ctrl-secret DHHC-1:01:YzkzMmE2ODQ1YzFjYzk1OWVmMDU3NTIwM2MyY2EyNGT38soj: 00:19:00.734 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.994 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:00.994 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.994 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.994 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.994 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:00.994 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:00.995 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:00.995 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:19:00.995 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:00.995 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:00.995 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:00.995 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:00.995 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.995 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:00.995 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.995 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.995 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.995 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:00.995 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:00.995 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:01.254 00:19:01.254 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:01.254 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:01.254 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.514 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.514 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.514 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.514 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.514 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.514 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:01.514 { 00:19:01.514 "cntlid": 119, 00:19:01.514 "qid": 0, 00:19:01.514 "state": "enabled", 00:19:01.514 "thread": "nvmf_tgt_poll_group_000", 00:19:01.514 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:01.514 "listen_address": { 00:19:01.514 "trtype": "TCP", 00:19:01.514 "adrfam": "IPv4", 00:19:01.514 "traddr": "10.0.0.2", 00:19:01.514 "trsvcid": "4420" 00:19:01.514 }, 00:19:01.514 "peer_address": { 00:19:01.514 "trtype": "TCP", 00:19:01.514 "adrfam": "IPv4", 00:19:01.514 "traddr": "10.0.0.1", 00:19:01.514 "trsvcid": "44064" 00:19:01.514 }, 00:19:01.514 "auth": { 00:19:01.514 "state": "completed", 00:19:01.514 "digest": "sha512", 00:19:01.514 "dhgroup": "ffdhe3072" 00:19:01.514 } 00:19:01.514 } 00:19:01.514 ]' 00:19:01.514 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:01.514 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:01.514 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:01.514 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:01.514 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:01.514 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.514 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.514 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.775 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=: 00:19:01.775 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=: 00:19:02.347 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.347 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:02.347 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.347 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.347 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.347 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:02.347 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:02.347 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:02.347 15:28:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:02.608 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:19:02.608 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:02.608 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:02.608 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:02.608 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:02.608 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.608 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.608 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.608 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.608 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.608 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.608 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.608 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.869 00:19:02.869 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:02.869 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:02.869 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.129 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.129 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.129 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.129 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.129 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.129 15:28:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:03.129 { 00:19:03.129 "cntlid": 121, 00:19:03.129 "qid": 0, 00:19:03.129 "state": "enabled", 00:19:03.129 "thread": "nvmf_tgt_poll_group_000", 00:19:03.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:03.129 "listen_address": { 00:19:03.129 "trtype": "TCP", 00:19:03.130 "adrfam": "IPv4", 00:19:03.130 "traddr": "10.0.0.2", 00:19:03.130 "trsvcid": "4420" 00:19:03.130 }, 00:19:03.130 "peer_address": { 00:19:03.130 "trtype": "TCP", 00:19:03.130 "adrfam": "IPv4", 00:19:03.130 "traddr": "10.0.0.1", 00:19:03.130 "trsvcid": "44096" 00:19:03.130 }, 00:19:03.130 "auth": { 00:19:03.130 "state": "completed", 00:19:03.130 "digest": "sha512", 00:19:03.130 "dhgroup": "ffdhe4096" 00:19:03.130 } 00:19:03.130 } 00:19:03.130 ]' 00:19:03.130 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:03.130 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:03.130 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:03.130 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:03.130 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:03.130 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.130 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.130 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.391 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQzODNiNjFiNTMwODBlZTI0MTliNTQ4MmE0MjBmM2YyMWU0M2YxOTFlNjRjYzE0ytg3PA==: --dhchap-ctrl-secret DHHC-1:03:YzAzYWJiYjgwYTc2NDQzOTQ4M2U1NDg4NTJkZjNhMDE0ZTY5NzAxYmFjMjlkZjUzODdlYmYyMDQ3MzhkZWIxMpUjPms=: 00:19:03.391 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTQzODNiNjFiNTMwODBlZTI0MTliNTQ4MmE0MjBmM2YyMWU0M2YxOTFlNjRjYzE0ytg3PA==: --dhchap-ctrl-secret DHHC-1:03:YzAzYWJiYjgwYTc2NDQzOTQ4M2U1NDg4NTJkZjNhMDE0ZTY5NzAxYmFjMjlkZjUzODdlYmYyMDQ3MzhkZWIxMpUjPms=: 00:19:03.963 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.963 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:03.963 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.963 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.963 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
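(Annotation, not part of the captured console output.) The trace above is the core loop of the nvmf_auth_target test: for each digest (sha512 in this stretch), DH group, and key index, the script pins the host to one negotiation parameter set, requires DH-HMAC-CHAP on the target, attaches a controller through the host-side SPDK app (driven over /var/tmp/host.sock via the hostrpc wrapper), confirms on the target that the queue pair reports the expected digest, dhgroup, and "completed" auth state, then repeats the handshake with the kernel initiator using plaintext DHHC-1 secrets. A condensed sketch of one iteration follows; it assumes the nvmf target and host app are already running and that the keyring names key0..key3/ckey0..ckey2 were registered earlier in target/auth.sh, as the real script does:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }   # drives the host-side SPDK app

  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

  for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do   # full list comes from the script's dhgroups array
    for keyid in 0 1 2 3; do
      # Pin the host to a single digest/DH group so each pass tests exactly one combination.
      hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"

      # Target side: require DH-HMAC-CHAP from this host. Key index 3 carries no
      # controller key, so that pass exercises unidirectional authentication.
      ctrlr_key=(); ((keyid < 3)) && ctrlr_key=(--dhchap-ctrlr-key "ckey$keyid")
      "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" "${ctrlr_key[@]}"

      # Host side: attaching the controller triggers the authentication handshake.
      hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key "key$keyid" "${ctrlr_key[@]}"

      # Verify on the target that the qpair authenticated with the expected parameters.
      "$rpc" nvmf_subsystem_get_qpairs "$subnqn" |
        jq -e '.[0].auth | .digest == "sha512" and .state == "completed"'

      # Tear down, then redo the handshake with the kernel initiator, which takes
      # the secrets in plaintext DHHC-1 form rather than as keyring names:
      #   nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
      #     --hostid "${hostnqn##*uuid:}" -l 0 --dhchap-secret "DHHC-1:..." [--dhchap-ctrl-secret "DHHC-1:..."]
      #   nvme disconnect -n "$subnqn"
      hostrpc bdev_nvme_detach_controller nvme0
      "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
    done
  done

Running both initiators against the same subsystem is the point of the matrix: it checks that the target's DH-HMAC-CHAP implementation interoperates with SPDK's own bdev_nvme host (keyring-named keys) and with the Linux kernel host (raw DHHC-1 secrets) for every digest, DH group, and uni-/bidirectional key layout.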
00:19:03.963 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:03.963 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:03.964 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:04.225 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:19:04.225 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:04.225 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:04.225 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:04.225 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:04.225 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.225 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.225 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.225 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.225 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.225 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.225 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.225 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.487 00:19:04.487 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:04.487 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:04.487 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.748 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.748 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.748 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.748 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.748 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.748 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:04.748 { 00:19:04.748 "cntlid": 123, 00:19:04.748 "qid": 0, 00:19:04.748 "state": "enabled", 00:19:04.748 "thread": "nvmf_tgt_poll_group_000", 00:19:04.748 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:04.748 "listen_address": { 00:19:04.748 "trtype": "TCP", 00:19:04.748 "adrfam": "IPv4", 00:19:04.748 "traddr": "10.0.0.2", 00:19:04.748 "trsvcid": "4420" 00:19:04.748 }, 00:19:04.748 "peer_address": { 00:19:04.748 "trtype": "TCP", 00:19:04.748 "adrfam": "IPv4", 00:19:04.748 "traddr": "10.0.0.1", 00:19:04.748 "trsvcid": "44116" 00:19:04.748 }, 00:19:04.748 "auth": { 00:19:04.748 "state": "completed", 00:19:04.748 "digest": "sha512", 00:19:04.748 "dhgroup": "ffdhe4096" 00:19:04.748 } 00:19:04.748 } 00:19:04.748 ]' 00:19:04.748 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:04.748 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:04.748 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:04.748 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:04.748 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:04.748 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.748 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.748 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.008 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: --dhchap-ctrl-secret DHHC-1:02:Y2YxMTcxMmE1Nzk3ODk4OWEwZDRkNTE3MjA1NWIzZDUzMGM2M2Y2ZjllNTQ3Y2Fj2NQJnQ==: 00:19:05.008 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: --dhchap-ctrl-secret DHHC-1:02:Y2YxMTcxMmE1Nzk3ODk4OWEwZDRkNTE3MjA1NWIzZDUzMGM2M2Y2ZjllNTQ3Y2Fj2NQJnQ==: 00:19:05.579 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.579 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:05.579 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.579 15:28:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.579 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.579 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:05.579 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:05.579 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:05.839 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:19:05.839 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:05.839 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:05.839 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:05.839 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:05.839 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.839 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.839 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.839 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.839 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.839 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.839 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.839 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.100 00:19:06.100 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:06.100 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:06.100 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.361 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.361 15:28:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.361 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.361 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.361 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.361 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.361 { 00:19:06.361 "cntlid": 125, 00:19:06.361 "qid": 0, 00:19:06.361 "state": "enabled", 00:19:06.361 "thread": "nvmf_tgt_poll_group_000", 00:19:06.361 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:06.361 "listen_address": { 00:19:06.361 "trtype": "TCP", 00:19:06.361 "adrfam": "IPv4", 00:19:06.361 "traddr": "10.0.0.2", 00:19:06.361 "trsvcid": "4420" 00:19:06.361 }, 00:19:06.361 "peer_address": { 00:19:06.361 "trtype": "TCP", 00:19:06.361 "adrfam": "IPv4", 00:19:06.361 "traddr": "10.0.0.1", 00:19:06.361 "trsvcid": "55292" 00:19:06.361 }, 00:19:06.361 "auth": { 00:19:06.361 "state": "completed", 00:19:06.361 "digest": "sha512", 00:19:06.361 "dhgroup": "ffdhe4096" 00:19:06.361 } 00:19:06.361 } 00:19:06.361 ]' 00:19:06.361 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.361 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:06.361 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.361 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:06.361 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:06.361 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.361 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.361 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.622 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: --dhchap-ctrl-secret DHHC-1:01:YzkzMmE2ODQ1YzFjYzk1OWVmMDU3NTIwM2MyY2EyNGT38soj: 00:19:06.622 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: --dhchap-ctrl-secret DHHC-1:01:YzkzMmE2ODQ1YzFjYzk1OWVmMDU3NTIwM2MyY2EyNGT38soj: 00:19:07.193 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.193 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.193 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:07.193 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.193 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.193 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.193 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:07.193 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:07.193 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:07.455 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:19:07.455 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:07.455 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:07.455 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:07.455 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:07.455 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.455 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:07.455 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.455 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.455 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.455 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:07.455 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:07.455 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:07.717 00:19:07.717 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:07.717 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:07.717 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.977 15:28:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.977 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.977 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.977 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.977 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.977 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:07.977 { 00:19:07.977 "cntlid": 127, 00:19:07.977 "qid": 0, 00:19:07.978 "state": "enabled", 00:19:07.978 "thread": "nvmf_tgt_poll_group_000", 00:19:07.978 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:07.978 "listen_address": { 00:19:07.978 "trtype": "TCP", 00:19:07.978 "adrfam": "IPv4", 00:19:07.978 "traddr": "10.0.0.2", 00:19:07.978 "trsvcid": "4420" 00:19:07.978 }, 00:19:07.978 "peer_address": { 00:19:07.978 "trtype": "TCP", 00:19:07.978 "adrfam": "IPv4", 00:19:07.978 "traddr": "10.0.0.1", 00:19:07.978 "trsvcid": "55318" 00:19:07.978 }, 00:19:07.978 "auth": { 00:19:07.978 "state": "completed", 00:19:07.978 "digest": "sha512", 00:19:07.978 "dhgroup": "ffdhe4096" 00:19:07.978 } 00:19:07.978 } 00:19:07.978 ]' 00:19:07.978 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:07.978 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:07.978 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:07.978 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:07.978 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:07.978 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.978 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.978 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.296 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=: 00:19:08.296 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=: 00:19:08.870 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.870 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:08.870 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.870 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.870 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.870 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:08.870 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:08.870 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:08.870 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:09.130 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:19:09.130 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:09.130 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:09.130 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:09.130 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:09.130 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.130 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.130 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.130 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.130 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.130 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.130 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.131 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.391 00:19:09.391 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:09.391 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:09.391 
15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.652 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.652 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.652 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.652 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.652 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.652 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:09.652 { 00:19:09.652 "cntlid": 129, 00:19:09.652 "qid": 0, 00:19:09.652 "state": "enabled", 00:19:09.652 "thread": "nvmf_tgt_poll_group_000", 00:19:09.652 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:09.652 "listen_address": { 00:19:09.652 "trtype": "TCP", 00:19:09.652 "adrfam": "IPv4", 00:19:09.652 "traddr": "10.0.0.2", 00:19:09.652 "trsvcid": "4420" 00:19:09.652 }, 00:19:09.652 "peer_address": { 00:19:09.652 "trtype": "TCP", 00:19:09.652 "adrfam": "IPv4", 00:19:09.652 "traddr": "10.0.0.1", 00:19:09.652 "trsvcid": "55346" 00:19:09.652 }, 00:19:09.652 "auth": { 00:19:09.652 "state": "completed", 00:19:09.652 "digest": "sha512", 00:19:09.652 "dhgroup": "ffdhe6144" 00:19:09.652 } 00:19:09.652 } 00:19:09.652 ]' 00:19:09.652 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:09.652 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:09.652 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:09.652 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:09.652 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:09.652 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.652 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.652 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.913 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQzODNiNjFiNTMwODBlZTI0MTliNTQ4MmE0MjBmM2YyMWU0M2YxOTFlNjRjYzE0ytg3PA==: --dhchap-ctrl-secret DHHC-1:03:YzAzYWJiYjgwYTc2NDQzOTQ4M2U1NDg4NTJkZjNhMDE0ZTY5NzAxYmFjMjlkZjUzODdlYmYyMDQ3MzhkZWIxMpUjPms=: 00:19:09.913 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTQzODNiNjFiNTMwODBlZTI0MTliNTQ4MmE0MjBmM2YyMWU0M2YxOTFlNjRjYzE0ytg3PA==: --dhchap-ctrl-secret 
DHHC-1:03:YzAzYWJiYjgwYTc2NDQzOTQ4M2U1NDg4NTJkZjNhMDE0ZTY5NzAxYmFjMjlkZjUzODdlYmYyMDQ3MzhkZWIxMpUjPms=: 00:19:10.483 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.743 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:10.743 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.743 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.743 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.743 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:10.743 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:10.743 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:10.743 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:19:10.743 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:10.743 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:10.743 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:10.743 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:10.743 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.743 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.743 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.743 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.743 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.743 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.743 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.743 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.315 00:19:11.315 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:11.315 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:11.315 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.315 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.315 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.315 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.315 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.315 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.315 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:11.315 { 00:19:11.315 "cntlid": 131, 00:19:11.315 "qid": 0, 00:19:11.315 "state": "enabled", 00:19:11.315 "thread": "nvmf_tgt_poll_group_000", 00:19:11.315 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:11.315 "listen_address": { 00:19:11.315 "trtype": "TCP", 00:19:11.315 "adrfam": "IPv4", 00:19:11.315 "traddr": "10.0.0.2", 00:19:11.315 "trsvcid": "4420" 00:19:11.315 }, 00:19:11.315 "peer_address": { 00:19:11.315 "trtype": "TCP", 00:19:11.315 "adrfam": "IPv4", 00:19:11.315 "traddr": "10.0.0.1", 00:19:11.315 "trsvcid": "55372" 00:19:11.315 }, 00:19:11.315 "auth": { 00:19:11.315 "state": "completed", 00:19:11.315 "digest": "sha512", 00:19:11.315 "dhgroup": "ffdhe6144" 00:19:11.315 } 00:19:11.315 } 00:19:11.315 ]' 00:19:11.315 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:11.315 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:11.315 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:11.576 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:11.576 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:11.576 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.576 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.576 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.576 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: --dhchap-ctrl-secret DHHC-1:02:Y2YxMTcxMmE1Nzk3ODk4OWEwZDRkNTE3MjA1NWIzZDUzMGM2M2Y2ZjllNTQ3Y2Fj2NQJnQ==: 00:19:11.576 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: --dhchap-ctrl-secret DHHC-1:02:Y2YxMTcxMmE1Nzk3ODk4OWEwZDRkNTE3MjA1NWIzZDUzMGM2M2Y2ZjllNTQ3Y2Fj2NQJnQ==: 00:19:12.518 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.518 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:12.518 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.518 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.518 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.518 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:12.518 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:12.518 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:12.518 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:19:12.518 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:12.518 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:12.518 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:12.518 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:12.518 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.518 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.518 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.518 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.518 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.518 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.518 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.518 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.778 00:19:12.778 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:12.778 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:12.778 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.039 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.039 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.039 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.039 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.039 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.039 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:13.039 { 00:19:13.039 "cntlid": 133, 00:19:13.039 "qid": 0, 00:19:13.039 "state": "enabled", 00:19:13.039 "thread": "nvmf_tgt_poll_group_000", 00:19:13.039 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:13.039 "listen_address": { 00:19:13.039 "trtype": "TCP", 00:19:13.039 "adrfam": "IPv4", 00:19:13.039 "traddr": "10.0.0.2", 00:19:13.039 "trsvcid": "4420" 00:19:13.039 }, 00:19:13.039 "peer_address": { 00:19:13.039 "trtype": "TCP", 00:19:13.039 "adrfam": "IPv4", 00:19:13.039 "traddr": "10.0.0.1", 00:19:13.039 "trsvcid": "55408" 00:19:13.039 }, 00:19:13.039 "auth": { 00:19:13.039 "state": "completed", 00:19:13.039 "digest": "sha512", 00:19:13.039 "dhgroup": "ffdhe6144" 00:19:13.039 } 00:19:13.039 } 00:19:13.039 ]' 00:19:13.039 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:13.039 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:13.039 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:13.300 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:13.300 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:13.300 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.300 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.300 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.300 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: --dhchap-ctrl-secret 
DHHC-1:01:YzkzMmE2ODQ1YzFjYzk1OWVmMDU3NTIwM2MyY2EyNGT38soj: 00:19:13.300 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: --dhchap-ctrl-secret DHHC-1:01:YzkzMmE2ODQ1YzFjYzk1OWVmMDU3NTIwM2MyY2EyNGT38soj: 00:19:14.243 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.243 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:14.243 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.243 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.243 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.243 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:14.243 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:14.243 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:14.243 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:19:14.243 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:14.243 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:14.243 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:14.243 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:14.243 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.243 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:14.243 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.243 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.243 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.243 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:14.243 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:19:14.243 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:14.504 00:19:14.504 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:14.504 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:14.504 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.766 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.766 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.766 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.766 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.766 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.766 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:14.766 { 00:19:14.766 "cntlid": 135, 00:19:14.766 "qid": 0, 00:19:14.766 "state": "enabled", 00:19:14.766 "thread": "nvmf_tgt_poll_group_000", 00:19:14.766 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:14.766 "listen_address": { 00:19:14.766 "trtype": "TCP", 00:19:14.766 "adrfam": "IPv4", 00:19:14.766 "traddr": "10.0.0.2", 00:19:14.766 "trsvcid": "4420" 00:19:14.766 }, 00:19:14.766 "peer_address": { 00:19:14.766 "trtype": "TCP", 00:19:14.766 "adrfam": "IPv4", 00:19:14.766 "traddr": "10.0.0.1", 00:19:14.766 "trsvcid": "55446" 00:19:14.766 }, 00:19:14.766 "auth": { 00:19:14.766 "state": "completed", 00:19:14.766 "digest": "sha512", 00:19:14.766 "dhgroup": "ffdhe6144" 00:19:14.766 } 00:19:14.766 } 00:19:14.766 ]' 00:19:14.766 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:14.766 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:14.766 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:14.766 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:14.766 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:15.028 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.028 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.028 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.028 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=: 00:19:15.028 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=: 00:19:15.969 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.969 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:15.969 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.969 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.969 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.969 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:15.969 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:15.969 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:15.969 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:15.969 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:19:15.969 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:15.969 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:15.969 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:15.969 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:15.969 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.969 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.969 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.969 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.969 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.969 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.969 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.969 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.542 00:19:16.542 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:16.542 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:16.542 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.542 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.542 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.542 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.542 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.542 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.542 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:16.542 { 00:19:16.542 "cntlid": 137, 00:19:16.542 "qid": 0, 00:19:16.542 "state": "enabled", 00:19:16.542 "thread": "nvmf_tgt_poll_group_000", 00:19:16.542 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:16.542 "listen_address": { 00:19:16.542 "trtype": "TCP", 00:19:16.542 "adrfam": "IPv4", 00:19:16.542 "traddr": "10.0.0.2", 00:19:16.542 "trsvcid": "4420" 00:19:16.542 }, 00:19:16.542 "peer_address": { 00:19:16.542 "trtype": "TCP", 00:19:16.542 "adrfam": "IPv4", 00:19:16.542 "traddr": "10.0.0.1", 00:19:16.542 "trsvcid": "54992" 00:19:16.542 }, 00:19:16.542 "auth": { 00:19:16.542 "state": "completed", 00:19:16.542 "digest": "sha512", 00:19:16.542 "dhgroup": "ffdhe8192" 00:19:16.542 } 00:19:16.542 } 00:19:16.542 ]' 00:19:16.542 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:16.803 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:16.803 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:16.803 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:16.803 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:16.803 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.803 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.803 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.064 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQzODNiNjFiNTMwODBlZTI0MTliNTQ4MmE0MjBmM2YyMWU0M2YxOTFlNjRjYzE0ytg3PA==: --dhchap-ctrl-secret DHHC-1:03:YzAzYWJiYjgwYTc2NDQzOTQ4M2U1NDg4NTJkZjNhMDE0ZTY5NzAxYmFjMjlkZjUzODdlYmYyMDQ3MzhkZWIxMpUjPms=: 00:19:17.064 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTQzODNiNjFiNTMwODBlZTI0MTliNTQ4MmE0MjBmM2YyMWU0M2YxOTFlNjRjYzE0ytg3PA==: --dhchap-ctrl-secret DHHC-1:03:YzAzYWJiYjgwYTc2NDQzOTQ4M2U1NDg4NTJkZjNhMDE0ZTY5NzAxYmFjMjlkZjUzODdlYmYyMDQ3MzhkZWIxMpUjPms=: 00:19:17.635 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.635 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:17.635 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.635 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.635 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.635 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:17.635 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:17.635 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:17.896 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:19:17.896 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:17.896 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:17.896 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:17.896 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:17.896 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.896 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.896 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.896 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.896 15:29:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.896 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.896 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.896 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.468 00:19:18.468 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:18.468 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:18.468 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.468 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.468 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.468 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.468 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.468 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.468 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:18.468 { 00:19:18.468 "cntlid": 139, 00:19:18.468 "qid": 0, 00:19:18.468 "state": "enabled", 00:19:18.468 "thread": "nvmf_tgt_poll_group_000", 00:19:18.468 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:18.468 "listen_address": { 00:19:18.468 "trtype": "TCP", 00:19:18.468 "adrfam": "IPv4", 00:19:18.468 "traddr": "10.0.0.2", 00:19:18.468 "trsvcid": "4420" 00:19:18.468 }, 00:19:18.468 "peer_address": { 00:19:18.468 "trtype": "TCP", 00:19:18.468 "adrfam": "IPv4", 00:19:18.468 "traddr": "10.0.0.1", 00:19:18.468 "trsvcid": "55020" 00:19:18.468 }, 00:19:18.468 "auth": { 00:19:18.468 "state": "completed", 00:19:18.468 "digest": "sha512", 00:19:18.468 "dhgroup": "ffdhe8192" 00:19:18.468 } 00:19:18.468 } 00:19:18.468 ]' 00:19:18.468 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:18.468 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:18.468 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:18.731 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:18.731 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:18.731 15:29:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.731 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.731 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.731 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: --dhchap-ctrl-secret DHHC-1:02:Y2YxMTcxMmE1Nzk3ODk4OWEwZDRkNTE3MjA1NWIzZDUzMGM2M2Y2ZjllNTQ3Y2Fj2NQJnQ==: 00:19:18.731 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: --dhchap-ctrl-secret DHHC-1:02:Y2YxMTcxMmE1Nzk3ODk4OWEwZDRkNTE3MjA1NWIzZDUzMGM2M2Y2ZjllNTQ3Y2Fj2NQJnQ==: 00:19:19.673 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.673 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:19.673 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.673 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.673 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.673 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:19.673 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:19.673 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:19.673 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:19:19.673 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:19.673 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:19.673 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:19.673 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:19.673 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.673 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.673 15:29:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.673 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.673 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.673 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.673 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.673 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.244 00:19:20.244 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:20.244 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:20.244 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.244 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.244 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.244 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.244 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.505 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.505 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:20.505 { 00:19:20.505 "cntlid": 141, 00:19:20.505 "qid": 0, 00:19:20.505 "state": "enabled", 00:19:20.505 "thread": "nvmf_tgt_poll_group_000", 00:19:20.505 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:20.505 "listen_address": { 00:19:20.505 "trtype": "TCP", 00:19:20.505 "adrfam": "IPv4", 00:19:20.505 "traddr": "10.0.0.2", 00:19:20.505 "trsvcid": "4420" 00:19:20.505 }, 00:19:20.505 "peer_address": { 00:19:20.505 "trtype": "TCP", 00:19:20.505 "adrfam": "IPv4", 00:19:20.505 "traddr": "10.0.0.1", 00:19:20.505 "trsvcid": "55042" 00:19:20.505 }, 00:19:20.505 "auth": { 00:19:20.505 "state": "completed", 00:19:20.505 "digest": "sha512", 00:19:20.505 "dhgroup": "ffdhe8192" 00:19:20.505 } 00:19:20.505 } 00:19:20.505 ]' 00:19:20.505 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:20.505 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:20.505 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:20.505 15:29:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:20.505 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:20.505 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.505 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.505 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.765 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: --dhchap-ctrl-secret DHHC-1:01:YzkzMmE2ODQ1YzFjYzk1OWVmMDU3NTIwM2MyY2EyNGT38soj: 00:19:20.765 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: --dhchap-ctrl-secret DHHC-1:01:YzkzMmE2ODQ1YzFjYzk1OWVmMDU3NTIwM2MyY2EyNGT38soj: 00:19:21.338 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.338 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:21.338 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.338 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.338 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.338 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:21.338 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:21.338 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:21.599 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:19:21.599 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:21.599 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:21.599 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:21.599 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:21.599 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.599 15:29:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:21.599 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.599 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.599 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.599 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:21.599 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:21.599 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:22.171 00:19:22.171 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:22.171 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:22.171 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.171 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.171 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.171 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.171 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.171 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.171 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:22.171 { 00:19:22.171 "cntlid": 143, 00:19:22.171 "qid": 0, 00:19:22.171 "state": "enabled", 00:19:22.171 "thread": "nvmf_tgt_poll_group_000", 00:19:22.171 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:22.171 "listen_address": { 00:19:22.171 "trtype": "TCP", 00:19:22.171 "adrfam": "IPv4", 00:19:22.171 "traddr": "10.0.0.2", 00:19:22.171 "trsvcid": "4420" 00:19:22.171 }, 00:19:22.171 "peer_address": { 00:19:22.171 "trtype": "TCP", 00:19:22.171 "adrfam": "IPv4", 00:19:22.171 "traddr": "10.0.0.1", 00:19:22.171 "trsvcid": "55064" 00:19:22.171 }, 00:19:22.171 "auth": { 00:19:22.171 "state": "completed", 00:19:22.171 "digest": "sha512", 00:19:22.171 "dhgroup": "ffdhe8192" 00:19:22.171 } 00:19:22.171 } 00:19:22.171 ]' 00:19:22.171 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:22.171 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:22.171 
15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:22.432 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:22.432 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:22.432 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.432 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.432 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.693 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=: 00:19:22.693 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=: 00:19:23.263 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.263 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:23.263 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.263 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.263 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.263 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:23.263 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:19:23.263 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:23.263 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:23.263 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:23.263 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:23.524 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:19:23.524 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:23.524 15:29:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:23.524 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:23.524 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:23.524 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.524 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.524 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.524 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.524 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.524 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.524 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.524 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.803 00:19:23.803 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:23.803 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.803 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:24.063 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.063 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.063 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.063 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.063 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.063 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:24.063 { 00:19:24.063 "cntlid": 145, 00:19:24.063 "qid": 0, 00:19:24.063 "state": "enabled", 00:19:24.063 "thread": "nvmf_tgt_poll_group_000", 00:19:24.063 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:24.063 "listen_address": { 00:19:24.063 "trtype": "TCP", 00:19:24.063 "adrfam": "IPv4", 00:19:24.063 "traddr": "10.0.0.2", 00:19:24.063 "trsvcid": "4420" 00:19:24.063 }, 00:19:24.063 "peer_address": { 00:19:24.063 
"trtype": "TCP", 00:19:24.063 "adrfam": "IPv4", 00:19:24.063 "traddr": "10.0.0.1", 00:19:24.064 "trsvcid": "55096" 00:19:24.064 }, 00:19:24.064 "auth": { 00:19:24.064 "state": "completed", 00:19:24.064 "digest": "sha512", 00:19:24.064 "dhgroup": "ffdhe8192" 00:19:24.064 } 00:19:24.064 } 00:19:24.064 ]' 00:19:24.064 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:24.064 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:24.064 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:24.064 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:24.325 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:24.325 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.325 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.325 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.325 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQzODNiNjFiNTMwODBlZTI0MTliNTQ4MmE0MjBmM2YyMWU0M2YxOTFlNjRjYzE0ytg3PA==: --dhchap-ctrl-secret DHHC-1:03:YzAzYWJiYjgwYTc2NDQzOTQ4M2U1NDg4NTJkZjNhMDE0ZTY5NzAxYmFjMjlkZjUzODdlYmYyMDQ3MzhkZWIxMpUjPms=: 00:19:24.325 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTQzODNiNjFiNTMwODBlZTI0MTliNTQ4MmE0MjBmM2YyMWU0M2YxOTFlNjRjYzE0ytg3PA==: --dhchap-ctrl-secret DHHC-1:03:YzAzYWJiYjgwYTc2NDQzOTQ4M2U1NDg4NTJkZjNhMDE0ZTY5NzAxYmFjMjlkZjUzODdlYmYyMDQ3MzhkZWIxMpUjPms=: 00:19:25.268 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.268 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:25.268 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.268 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.268 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.268 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:25.268 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.268 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.268 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.268 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:19:25.268 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:25.268 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:19:25.268 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:25.268 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:25.268 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:25.268 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:25.268 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:19:25.268 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:25.268 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:25.530 request: 00:19:25.530 { 00:19:25.530 "name": "nvme0", 00:19:25.530 "trtype": "tcp", 00:19:25.530 "traddr": "10.0.0.2", 00:19:25.530 "adrfam": "ipv4", 00:19:25.530 "trsvcid": "4420", 00:19:25.530 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:25.530 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:25.530 "prchk_reftag": false, 00:19:25.530 "prchk_guard": false, 00:19:25.530 "hdgst": false, 00:19:25.530 "ddgst": false, 00:19:25.530 "dhchap_key": "key2", 00:19:25.530 "allow_unrecognized_csi": false, 00:19:25.530 "method": "bdev_nvme_attach_controller", 00:19:25.530 "req_id": 1 00:19:25.530 } 00:19:25.530 Got JSON-RPC error response 00:19:25.530 response: 00:19:25.530 { 00:19:25.530 "code": -5, 00:19:25.530 "message": "Input/output error" 00:19:25.530 } 00:19:25.530 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:25.530 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:25.530 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:25.530 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:25.530 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:25.530 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.530 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.530 15:29:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.530 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.530 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.530 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.530 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.530 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:25.530 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:25.530 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:25.530 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:25.530 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:25.530 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:25.530 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:25.530 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:25.530 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:25.530 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:26.102 request: 00:19:26.102 { 00:19:26.102 "name": "nvme0", 00:19:26.102 "trtype": "tcp", 00:19:26.102 "traddr": "10.0.0.2", 00:19:26.102 "adrfam": "ipv4", 00:19:26.102 "trsvcid": "4420", 00:19:26.102 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:26.102 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:26.102 "prchk_reftag": false, 00:19:26.103 "prchk_guard": false, 00:19:26.103 "hdgst": false, 00:19:26.103 "ddgst": false, 00:19:26.103 "dhchap_key": "key1", 00:19:26.103 "dhchap_ctrlr_key": "ckey2", 00:19:26.103 "allow_unrecognized_csi": false, 00:19:26.103 "method": "bdev_nvme_attach_controller", 00:19:26.103 "req_id": 1 00:19:26.103 } 00:19:26.103 Got JSON-RPC error response 00:19:26.103 response: 00:19:26.103 { 00:19:26.103 "code": -5, 00:19:26.103 "message": "Input/output error" 00:19:26.103 } 00:19:26.103 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:26.103 15:29:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:26.103 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:26.103 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:26.103 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:26.103 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.103 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.103 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.103 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:26.103 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.103 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.103 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.103 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.103 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:26.103 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.103 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:26.103 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:26.103 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:26.103 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:26.103 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.103 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.103 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.364 request: 00:19:26.364 { 00:19:26.364 "name": "nvme0", 00:19:26.364 "trtype": "tcp", 00:19:26.364 "traddr": "10.0.0.2", 00:19:26.364 "adrfam": "ipv4", 00:19:26.364 "trsvcid": "4420", 00:19:26.364 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:26.364 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:26.364 "prchk_reftag": false, 00:19:26.364 "prchk_guard": false, 00:19:26.364 "hdgst": false, 00:19:26.364 "ddgst": false, 00:19:26.364 "dhchap_key": "key1", 00:19:26.364 "dhchap_ctrlr_key": "ckey1", 00:19:26.364 "allow_unrecognized_csi": false, 00:19:26.364 "method": "bdev_nvme_attach_controller", 00:19:26.364 "req_id": 1 00:19:26.364 } 00:19:26.364 Got JSON-RPC error response 00:19:26.364 response: 00:19:26.364 { 00:19:26.364 "code": -5, 00:19:26.364 "message": "Input/output error" 00:19:26.364 } 00:19:26.364 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:26.364 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:26.364 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:26.364 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:26.364 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:26.364 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.364 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.364 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.364 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 574176 00:19:26.364 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 574176 ']' 00:19:26.364 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 574176 00:19:26.364 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:26.364 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:26.364 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 574176 00:19:26.625 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:26.625 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:26.625 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 574176' 00:19:26.625 killing process with pid 574176 00:19:26.625 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 574176 00:19:26.625 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 574176 00:19:26.625 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:26.625 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:26.625 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:26.625 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:19:26.625 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=600386 00:19:26.625 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 600386 00:19:26.625 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:26.625 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 600386 ']' 00:19:26.625 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.625 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:26.626 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.626 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:26.626 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.569 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:27.569 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:27.569 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:27.569 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:27.569 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.569 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:27.569 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:27.569 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 600386 00:19:27.569 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 600386 ']' 00:19:27.569 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.569 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.569 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
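The waitforlisten step above blocks until the freshly restarted nvmf_tgt answers on its UNIX-domain RPC socket. A minimal standalone sketch of that wait loop (a sketch only, assuming rpc.py from the SPDK tree and the default /var/tmp/spdk.sock socket; the real autotest helper also verifies the pid is still alive) is:

    # Poll the RPC socket until the target responds; give up after ~10 s.
    for _ in $(seq 1 100); do
        if scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; then
            break            # target is up and serving RPCs
        fi
        sleep 0.1
    done
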
00:19:27.569 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.569 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.569 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:27.569 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:27.569 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:19:27.569 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.569 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.830 null0 00:19:27.830 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.830 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:27.830 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.vGA 00:19:27.830 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.830 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.830 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.830 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.4e8 ]] 00:19:27.830 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4e8 00:19:27.830 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.830 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.830 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.830 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:27.830 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.D4f 00:19:27.830 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.830 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.830 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.830 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.YLS ]] 00:19:27.830 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.YLS 00:19:27.830 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.830 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.830 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.830 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:27.830 15:29:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.j9m 00:19:27.830 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.830 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.830 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.830 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.kSE ]] 00:19:27.830 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kSE 00:19:27.830 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.830 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.830 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.830 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:27.830 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.iie 00:19:27.830 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.831 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.831 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.831 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:19:27.831 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:19:27.831 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:27.831 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:27.831 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:27.831 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:27.831 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.831 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:27.831 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.831 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.831 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.831 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:27.831 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
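Taken together, the keyring and subsystem RPCs above are the whole DH-HMAC-CHAP setup being exercised: the target learns each secret under a keyring name, the host NQN is bound to one of those keys, and the host-side bdev driver then authenticates with the matching key when attaching. A condensed sketch of the sequence (hostnqn abbreviated as a placeholder; key file and digest/dhgroup values are the ones seen earlier in this run, not a canonical recipe):

    # Target side: register the secret and require it for this host.
    rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha512.iie
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> --dhchap-key key3

    # Host side: constrain negotiation, then attach with the matching key.
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q <hostnqn> -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 --dhchap-key key3
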
00:19:27.831 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:28.772 nvme0n1 00:19:28.772 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:28.772 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:28.772 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.772 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.772 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.772 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.772 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.772 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.772 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:28.772 { 00:19:28.772 "cntlid": 1, 00:19:28.772 "qid": 0, 00:19:28.772 "state": "enabled", 00:19:28.772 "thread": "nvmf_tgt_poll_group_000", 00:19:28.772 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:28.772 "listen_address": { 00:19:28.772 "trtype": "TCP", 00:19:28.772 "adrfam": "IPv4", 00:19:28.772 "traddr": "10.0.0.2", 00:19:28.772 "trsvcid": "4420" 00:19:28.772 }, 00:19:28.772 "peer_address": { 00:19:28.772 "trtype": "TCP", 00:19:28.772 "adrfam": "IPv4", 00:19:28.772 "traddr": "10.0.0.1", 00:19:28.772 "trsvcid": "40126" 00:19:28.772 }, 00:19:28.772 "auth": { 00:19:28.772 "state": "completed", 00:19:28.772 "digest": "sha512", 00:19:28.772 "dhgroup": "ffdhe8192" 00:19:28.772 } 00:19:28.772 } 00:19:28.772 ]' 00:19:28.772 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:28.773 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:28.773 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:29.033 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:29.033 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:29.033 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.033 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.033 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.294 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=: 00:19:29.294 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=: 00:19:29.865 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.865 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:29.865 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.865 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.865 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.865 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:29.865 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.865 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.865 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.865 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:29.865 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:30.126 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:30.126 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:30.126 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:30.126 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:30.126 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:30.126 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:30.126 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:30.126 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:30.126 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:30.126 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:30.126 request: 00:19:30.126 { 00:19:30.126 "name": "nvme0", 00:19:30.126 "trtype": "tcp", 00:19:30.126 "traddr": "10.0.0.2", 00:19:30.126 "adrfam": "ipv4", 00:19:30.126 "trsvcid": "4420", 00:19:30.126 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:30.126 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:30.126 "prchk_reftag": false, 00:19:30.126 "prchk_guard": false, 00:19:30.126 "hdgst": false, 00:19:30.126 "ddgst": false, 00:19:30.126 "dhchap_key": "key3", 00:19:30.126 "allow_unrecognized_csi": false, 00:19:30.126 "method": "bdev_nvme_attach_controller", 00:19:30.126 "req_id": 1 00:19:30.126 } 00:19:30.126 Got JSON-RPC error response 00:19:30.126 response: 00:19:30.126 { 00:19:30.126 "code": -5, 00:19:30.126 "message": "Input/output error" 00:19:30.126 } 00:19:30.126 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:30.126 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:30.126 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:30.126 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:30.126 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:19:30.126 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:19:30.126 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:30.126 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:30.389 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:30.389 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:30.389 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:30.389 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:30.389 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:30.389 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:30.389 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:30.389 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:30.389 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:30.389 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:30.649 request: 00:19:30.650 { 00:19:30.650 "name": "nvme0", 00:19:30.650 "trtype": "tcp", 00:19:30.650 "traddr": "10.0.0.2", 00:19:30.650 "adrfam": "ipv4", 00:19:30.650 "trsvcid": "4420", 00:19:30.650 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:30.650 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:30.650 "prchk_reftag": false, 00:19:30.650 "prchk_guard": false, 00:19:30.650 "hdgst": false, 00:19:30.650 "ddgst": false, 00:19:30.650 "dhchap_key": "key3", 00:19:30.650 "allow_unrecognized_csi": false, 00:19:30.650 "method": "bdev_nvme_attach_controller", 00:19:30.650 "req_id": 1 00:19:30.650 } 00:19:30.650 Got JSON-RPC error response 00:19:30.650 response: 00:19:30.650 { 00:19:30.650 "code": -5, 00:19:30.650 "message": "Input/output error" 00:19:30.650 } 00:19:30.650 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:30.650 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:30.650 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:30.650 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:30.650 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:30.650 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:19:30.650 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:30.650 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:30.650 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:30.650 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:30.650 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:30.650 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.650 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.650 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.650 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:30.650 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.650 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.650 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.650 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:30.650 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:30.650 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:30.650 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:30.650 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:30.650 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:30.650 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:30.650 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:30.650 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:30.650 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:31.221 request: 00:19:31.221 { 00:19:31.221 "name": "nvme0", 00:19:31.221 "trtype": "tcp", 00:19:31.221 "traddr": "10.0.0.2", 00:19:31.221 "adrfam": "ipv4", 00:19:31.221 "trsvcid": "4420", 00:19:31.221 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:31.221 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:31.221 "prchk_reftag": false, 00:19:31.221 "prchk_guard": false, 00:19:31.221 "hdgst": false, 00:19:31.221 "ddgst": false, 00:19:31.221 "dhchap_key": "key0", 00:19:31.221 "dhchap_ctrlr_key": "key1", 00:19:31.221 "allow_unrecognized_csi": false, 00:19:31.221 "method": "bdev_nvme_attach_controller", 00:19:31.221 "req_id": 1 00:19:31.221 } 00:19:31.221 Got JSON-RPC error response 00:19:31.221 response: 00:19:31.221 { 00:19:31.221 "code": -5, 00:19:31.221 "message": "Input/output error" 00:19:31.221 } 00:19:31.221 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:31.221 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:31.221 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:31.221 15:29:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:31.221 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:19:31.222 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:31.222 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:31.222 nvme0n1 00:19:31.482 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:19:31.482 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:19:31.482 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.482 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.482 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.482 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.743 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:31.743 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.743 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.743 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.743 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:31.743 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:31.743 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:32.314 nvme0n1 00:19:32.575 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:19:32.575 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:19:32.575 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.575 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.575 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:32.575 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.575 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.575 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.575 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:19:32.575 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:19:32.575 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.835 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.835 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: --dhchap-ctrl-secret DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=: 00:19:32.835 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: --dhchap-ctrl-secret DHHC-1:03:YTQzYzY5ZGU4ZDJlMGYyOGM0NDAzMzE3MTZjMzYwZDAzYWJhZDEwMDEzYzg2YWZjM2MzY2Q4Zjc5ZTdmOWY2Yfg5aAI=: 00:19:33.405 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:19:33.405 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:19:33.405 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:19:33.405 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:19:33.405 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:19:33.405 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:19:33.405 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:19:33.405 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.405 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.666 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:19:33.666 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:33.666 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:19:33.666 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:33.666 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:33.666 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:33.666 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:33.666 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:33.666 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:33.666 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:34.238 request: 00:19:34.238 { 00:19:34.238 "name": "nvme0", 00:19:34.238 "trtype": "tcp", 00:19:34.238 "traddr": "10.0.0.2", 00:19:34.238 "adrfam": "ipv4", 00:19:34.238 "trsvcid": "4420", 00:19:34.238 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:34.238 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:34.238 "prchk_reftag": false, 00:19:34.238 "prchk_guard": false, 00:19:34.238 "hdgst": false, 00:19:34.238 "ddgst": false, 00:19:34.238 "dhchap_key": "key1", 00:19:34.238 "allow_unrecognized_csi": false, 00:19:34.238 "method": "bdev_nvme_attach_controller", 00:19:34.238 "req_id": 1 00:19:34.238 } 00:19:34.238 Got JSON-RPC error response 00:19:34.238 response: 00:19:34.238 { 00:19:34.238 "code": -5, 00:19:34.238 "message": "Input/output error" 00:19:34.238 } 00:19:34.238 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:34.238 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:34.238 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:34.238 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:34.238 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:34.238 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:34.238 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:34.810 nvme0n1 00:19:34.810 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:19:34.810 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:19:34.810 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.070 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.070 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.070 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.332 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:35.332 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.332 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.332 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.332 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:19:35.332 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:35.332 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:35.593 nvme0n1 00:19:35.593 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:19:35.593 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:19:35.593 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.593 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.593 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.593 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.854 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:35.854 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.854 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.854 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.854 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: '' 2s 00:19:35.854 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:35.854 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:35.854 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: 00:19:35.854 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:19:35.854 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:35.854 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:35.854 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: ]] 00:19:35.854 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YThiYTczZGU0NDIyODMxMjM5MzU0MDAyNGYyMWM2ODLTuhh5: 00:19:35.854 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:19:35.854 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:35.854 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:38.397 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:19:38.397 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:38.397 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:38.398 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:38.398 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:38.398 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:38.398 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:38.398 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:19:38.398 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.398 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.398 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.398 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: 2s 00:19:38.398 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:38.398 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:38.398 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:19:38.398 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: 00:19:38.398 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:38.398 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:38.398 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:19:38.398 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: ]] 00:19:38.398 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MTRmOTgyYjZhYTc3NDc1OGU3NmU2OGNmMmJlOWE4NWVjNzMzMzhhYTcxNDlkZDZh99jNBQ==: 00:19:38.398 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:38.398 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:40.310 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:19:40.310 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:40.310 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:40.310 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:40.310 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:40.310 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:40.310 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:40.310 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.310 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:40.310 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.310 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.310 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.310 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:40.310 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:40.310 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:40.881 nvme0n1 00:19:40.881 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:40.881 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.881 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.881 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.881 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:40.881 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:41.142 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:19:41.142 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:19:41.142 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.401 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.401 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:41.401 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.401 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.401 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.401 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:19:41.401 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:19:41.660 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:19:41.660 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:19:41.660 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.920 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.920 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:41.920 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.920 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.920 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.920 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:41.920 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:41.920 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:41.920 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:19:41.920 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.920 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:41.920 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.920 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:41.920 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:42.181 request: 00:19:42.181 { 00:19:42.181 "name": "nvme0", 00:19:42.181 "dhchap_key": "key1", 00:19:42.181 "dhchap_ctrlr_key": "key3", 00:19:42.181 "method": "bdev_nvme_set_keys", 00:19:42.181 "req_id": 1 00:19:42.181 } 00:19:42.181 Got JSON-RPC error response 00:19:42.181 response: 00:19:42.181 { 00:19:42.181 "code": -13, 00:19:42.181 "message": "Permission denied" 00:19:42.181 } 00:19:42.181 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:42.181 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:42.181 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:42.181 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:42.181 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:42.181 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:42.181 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.441 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:19:42.441 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:19:43.381 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:43.381 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:43.381 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.641 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:19:43.641 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:43.641 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.641 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.641 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.641 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:43.641 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:43.641 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:44.582 nvme0n1 00:19:44.582 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:44.582 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.582 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.582 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.582 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:44.582 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:44.582 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:44.582 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
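The surrounding trace exercises live re-keying: nvmf_subsystem_set_keys swaps the keys the target will accept from this host, bdev_nvme_set_keys re-authenticates the existing host controller in place, and a pairing the subsystem was not given is refused with JSON-RPC error -13 ("Permission denied") instead of downgrading. A condensed happy-path sketch reusing this run's names (rpc path as in the earlier sketch):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Target: rotate the host key and the controller (bidirectional) key for this host
$rpc nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --dhchap-key key2 --dhchap-ctrlr-key key3

# Host: re-authenticate the live controller with the new pair; the controller
# survives, so bdev_nvme_get_controllers still reports nvme0 afterwards
$rpc -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key key3

# A pair the subsystem does not allow fails the re-key attempt with:
#   {"code": -13, "message": "Permission denied"}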
00:19:44.582 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:44.582 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:44.582 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:44.582 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:44.582 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:44.843 request: 00:19:44.843 { 00:19:44.843 "name": "nvme0", 00:19:44.843 "dhchap_key": "key2", 00:19:44.843 "dhchap_ctrlr_key": "key0", 00:19:44.843 "method": "bdev_nvme_set_keys", 00:19:44.843 "req_id": 1 00:19:44.843 } 00:19:44.843 Got JSON-RPC error response 00:19:44.843 response: 00:19:44.843 { 00:19:44.843 "code": -13, 00:19:44.843 "message": "Permission denied" 00:19:44.843 } 00:19:44.843 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:44.843 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:44.843 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:44.843 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:44.843 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:44.843 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.843 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:45.103 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:19:45.103 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:19:46.046 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:46.046 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:46.046 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.306 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:19:46.306 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:19:46.306 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:19:46.306 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 574701 00:19:46.306 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 574701 ']' 00:19:46.306 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 574701 00:19:46.306 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:46.306 15:29:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:46.306 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 574701 00:19:46.306 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:46.306 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:46.307 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 574701' 00:19:46.307 killing process with pid 574701 00:19:46.307 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 574701 00:19:46.307 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 574701 00:19:46.568 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:46.568 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:46.568 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:19:46.568 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:46.568 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:19:46.568 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:46.568 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:46.568 rmmod nvme_tcp 00:19:46.568 rmmod nvme_fabrics 00:19:46.568 rmmod nvme_keyring 00:19:46.568 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:46.568 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:19:46.568 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:19:46.568 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 600386 ']' 00:19:46.568 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 600386 00:19:46.568 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 600386 ']' 00:19:46.568 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 600386 00:19:46.568 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:46.568 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:46.568 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 600386 00:19:46.568 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:46.568 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:46.568 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 600386' 00:19:46.568 killing process with pid 600386 00:19:46.568 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 600386 00:19:46.568 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@978 -- # wait 600386 00:19:46.829 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:46.829 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:46.829 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:46.829 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:19:46.829 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:19:46.829 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:46.829 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:19:46.829 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:46.829 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:46.829 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:46.829 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:46.829 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:48.743 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:48.743 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.vGA /tmp/spdk.key-sha256.D4f /tmp/spdk.key-sha384.j9m /tmp/spdk.key-sha512.iie /tmp/spdk.key-sha512.4e8 /tmp/spdk.key-sha384.YLS /tmp/spdk.key-sha256.kSE '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:48.743 00:19:48.743 real 2m36.773s 00:19:48.743 user 5m52.647s 00:19:48.744 sys 0m25.105s 00:19:48.744 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:48.744 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.744 ************************************ 00:19:48.744 END TEST nvmf_auth_target 00:19:48.744 ************************************ 00:19:48.744 15:29:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:19:48.744 15:29:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:48.744 15:29:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:48.744 15:29:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:48.744 15:29:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:49.007 ************************************ 00:19:49.007 START TEST nvmf_bdevio_no_huge 00:19:49.007 ************************************ 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:49.007 * Looking for test storage... 
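The trace that follows steps through the lcov version probe in scripts/common.sh: the installed version string is split into numeric fields and compared field by field against 2, so that 1.15 selects the pre-2.x coverage options. The helper name below is the one in the trace, but the body is a paraphrase of the traced logic (which delegates to cmp_versions), not the verbatim script:

# Rough rendering of the traced comparison: lt VER1 VER2 succeeds when VER1 < VER2
lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1  # first differing field decides
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
    done
    return 1  # equal versions are not "less than"
}

lt 1.15 2 && echo "lcov < 2: use --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"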
00:19:49.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:49.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.007 --rc genhtml_branch_coverage=1 00:19:49.007 --rc genhtml_function_coverage=1 00:19:49.007 --rc genhtml_legend=1 00:19:49.007 --rc geninfo_all_blocks=1 00:19:49.007 --rc geninfo_unexecuted_blocks=1 00:19:49.007 00:19:49.007 ' 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:49.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.007 --rc genhtml_branch_coverage=1 00:19:49.007 --rc genhtml_function_coverage=1 00:19:49.007 --rc genhtml_legend=1 00:19:49.007 --rc geninfo_all_blocks=1 00:19:49.007 --rc geninfo_unexecuted_blocks=1 00:19:49.007 00:19:49.007 ' 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:49.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.007 --rc genhtml_branch_coverage=1 00:19:49.007 --rc genhtml_function_coverage=1 00:19:49.007 --rc genhtml_legend=1 00:19:49.007 --rc geninfo_all_blocks=1 00:19:49.007 --rc geninfo_unexecuted_blocks=1 00:19:49.007 00:19:49.007 ' 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:49.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.007 --rc genhtml_branch_coverage=1 00:19:49.007 --rc genhtml_function_coverage=1 00:19:49.007 --rc genhtml_legend=1 00:19:49.007 --rc geninfo_all_blocks=1 00:19:49.007 --rc geninfo_unexecuted_blocks=1 00:19:49.007 00:19:49.007 ' 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.007 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.008 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.008 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:49.008 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.008 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:19:49.008 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:49.008 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:49.008 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:49.008 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:49.008 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:49.008 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:49.008 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:49.008 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:49.008 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:49.008 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:49.008 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:49.008 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:49.008 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:49.008 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:49.008 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:49.008 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:49.008 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:49.008 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:49.008 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:49.008 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:49.008 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:49.269 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:49.269 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:49.269 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:19:49.269 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:57.413 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:57.413 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:19:57.413 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:57.413 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:57.413 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:57.413 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:57.413 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:57.413 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:19:57.413 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:57.413 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:19:57.413 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:19:57.413 
15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:19:57.413 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:19:57.413 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:19:57.413 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:19:57.413 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:57.413 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:57.414 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:57.414 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:57.414 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:57.414 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:57.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:57.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:19:57.414 00:19:57.414 --- 10.0.0.2 ping statistics --- 00:19:57.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.414 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:57.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:57.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:19:57.414 00:19:57.414 --- 10.0.0.1 ping statistics --- 00:19:57.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.414 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:57.414 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:57.415 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:57.415 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=608554 00:19:57.415 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 608554 00:19:57.415 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:57.415 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 608554 ']' 00:19:57.415 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.415 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:19:57.415 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:57.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:57.415 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:57.415 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:57.415 [2024-11-20 15:29:45.515000] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:19:57.415 [2024-11-20 15:29:45.515074] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:57.415 [2024-11-20 15:29:45.622130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:57.415 [2024-11-20 15:29:45.683212] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:57.415 [2024-11-20 15:29:45.683261] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:57.415 [2024-11-20 15:29:45.683271] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:57.415 [2024-11-20 15:29:45.683278] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:57.415 [2024-11-20 15:29:45.683285] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
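The target here is launched hugepage-free: --no-huge backs DPDK memory with anonymous pages, -s 1024 caps it at 1024 MB, and -m 0x78 pins reactors to cores 3-6, which is exactly what the four reactor lines that follow report. A minimal sketch of the same launch-and-wait step, assuming the default RPC socket /var/tmp/spdk.sock (the harness's waitforlisten wraps the same idea in retries and a timeout):

    # start nvmf_tgt inside the test namespace without hugepages
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!

    # block until the app's RPC socket answers before sending any configuration
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
            sleep 0.5
    done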
00:19:57.415 [2024-11-20 15:29:45.684798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:57.415 [2024-11-20 15:29:45.684959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:19:57.415 [2024-11-20 15:29:45.685118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:57.415 [2024-11-20 15:29:45.685119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:19:57.415 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:57.415 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:19:57.415 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:57.415 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:57.415 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:57.676 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:57.676 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:57.676 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.676 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:57.676 [2024-11-20 15:29:46.394069] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:57.676 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.676 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:57.676 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.676 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:57.676 Malloc0 00:19:57.676 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.676 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:57.676 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.676 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:57.676 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.676 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:57.676 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.676 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:57.676 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.676 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:19:57.676 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.676 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:57.676 [2024-11-20 15:29:46.447897] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:57.676 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.676 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:57.676 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:57.676 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:19:57.676 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:19:57.676 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:57.676 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:57.676 { 00:19:57.676 "params": { 00:19:57.676 "name": "Nvme$subsystem", 00:19:57.676 "trtype": "$TEST_TRANSPORT", 00:19:57.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.676 "adrfam": "ipv4", 00:19:57.676 "trsvcid": "$NVMF_PORT", 00:19:57.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.676 "hdgst": ${hdgst:-false}, 00:19:57.676 "ddgst": ${ddgst:-false} 00:19:57.676 }, 00:19:57.676 "method": "bdev_nvme_attach_controller" 00:19:57.676 } 00:19:57.676 EOF 00:19:57.676 )") 00:19:57.676 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:19:57.676 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:19:57.676 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:19:57.676 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:57.676 "params": { 00:19:57.676 "name": "Nvme1", 00:19:57.676 "trtype": "tcp", 00:19:57.676 "traddr": "10.0.0.2", 00:19:57.676 "adrfam": "ipv4", 00:19:57.676 "trsvcid": "4420", 00:19:57.676 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.676 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:57.676 "hdgst": false, 00:19:57.676 "ddgst": false 00:19:57.676 }, 00:19:57.676 "method": "bdev_nvme_attach_controller" 00:19:57.676 }' 00:19:57.676 [2024-11-20 15:29:46.506751] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
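bdevio takes its single bdev from the JSON assembled above instead of over RPC: gen_nvmf_target_json emits the bdev_nvme_attach_controller entry for 10.0.0.2:4420 (wrapped, in SPDK's --json app-config format, in a {"subsystems": [{"subsystem": "bdev", "config": [...]}]} envelope), and the harness hands it over an anonymous descriptor -- the /dev/fd/62 on the command line. A sketch of the equivalent invocation using process substitution, which is what creates that descriptor:

    # feed the generated attach-controller config to bdevio without a temp file
    ./test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024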
00:19:57.676 [2024-11-20 15:29:46.506822] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid608888 ] 00:19:57.676 [2024-11-20 15:29:46.604535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:57.936 [2024-11-20 15:29:46.664454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:57.936 [2024-11-20 15:29:46.664712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:57.936 [2024-11-20 15:29:46.664714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.936 I/O targets: 00:19:57.936 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:57.936 00:19:57.936 00:19:57.936 CUnit - A unit testing framework for C - Version 2.1-3 00:19:57.936 http://cunit.sourceforge.net/ 00:19:57.936 00:19:57.936 00:19:57.936 Suite: bdevio tests on: Nvme1n1 00:19:58.197 Test: blockdev write read block ...passed 00:19:58.197 Test: blockdev write zeroes read block ...passed 00:19:58.197 Test: blockdev write zeroes read no split ...passed 00:19:58.197 Test: blockdev write zeroes read split ...passed 00:19:58.197 Test: blockdev write zeroes read split partial ...passed 00:19:58.197 Test: blockdev reset ...[2024-11-20 15:29:47.074870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:58.197 [2024-11-20 15:29:47.074978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa79800 (9): Bad file descriptor 00:19:58.197 [2024-11-20 15:29:47.087537] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:19:58.197 passed 00:19:58.197 Test: blockdev write read 8 blocks ...passed 00:19:58.197 Test: blockdev write read size > 128k ...passed 00:19:58.197 Test: blockdev write read invalid size ...passed 00:19:58.197 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:58.197 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:58.197 Test: blockdev write read max offset ...passed 00:19:58.458 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:58.458 Test: blockdev writev readv 8 blocks ...passed 00:19:58.458 Test: blockdev writev readv 30 x 1block ...passed 00:19:58.458 Test: blockdev writev readv block ...passed 00:19:58.458 Test: blockdev writev readv size > 128k ...passed 00:19:58.458 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:58.458 Test: blockdev comparev and writev ...[2024-11-20 15:29:47.273614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:58.458 [2024-11-20 15:29:47.273666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:58.458 [2024-11-20 15:29:47.273683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:58.458 [2024-11-20 15:29:47.273692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:58.458 [2024-11-20 15:29:47.274253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:58.458 [2024-11-20 15:29:47.274272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:58.458 [2024-11-20 15:29:47.274288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:58.458 [2024-11-20 15:29:47.274298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:58.458 [2024-11-20 15:29:47.274834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:58.458 [2024-11-20 15:29:47.274849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:58.458 [2024-11-20 15:29:47.274863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:58.458 [2024-11-20 15:29:47.274874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:58.458 [2024-11-20 15:29:47.275437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:58.458 [2024-11-20 15:29:47.275453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:58.459 [2024-11-20 15:29:47.275469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:58.459 [2024-11-20 15:29:47.275480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:58.459 passed 00:19:58.459 Test: blockdev nvme passthru rw ...passed 00:19:58.459 Test: blockdev nvme passthru vendor specific ...[2024-11-20 15:29:47.360053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:58.459 [2024-11-20 15:29:47.360074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:58.459 [2024-11-20 15:29:47.360442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:58.459 [2024-11-20 15:29:47.360454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:58.459 [2024-11-20 15:29:47.360817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:58.459 [2024-11-20 15:29:47.360831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:58.459 [2024-11-20 15:29:47.361223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:58.459 [2024-11-20 15:29:47.361237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:58.459 passed 00:19:58.459 Test: blockdev nvme admin passthru ...passed 00:19:58.719 Test: blockdev copy ...passed 00:19:58.719 00:19:58.719 Run Summary: Type Total Ran Passed Failed Inactive 00:19:58.719 suites 1 1 n/a 0 0 00:19:58.719 tests 23 23 23 0 0 00:19:58.719 asserts 152 152 152 0 n/a 00:19:58.719 00:19:58.719 Elapsed time = 1.068 seconds 00:19:58.980 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:58.980 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.980 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:58.980 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.980 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:58.980 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:58.980 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:58.980 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:19:58.980 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:58.980 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:19:58.980 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:58.980 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:58.980 rmmod nvme_tcp 00:19:58.980 rmmod nvme_fabrics 00:19:58.980 rmmod nvme_keyring 00:19:58.980 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:58.980 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:19:58.980 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:19:58.980 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 608554 ']' 00:19:58.980 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 608554 00:19:58.980 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 608554 ']' 00:19:58.980 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 608554 00:19:58.980 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:19:58.980 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:58.980 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 608554 00:19:58.980 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:19:58.980 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:19:58.980 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 608554' 00:19:58.980 killing process with pid 608554 00:19:58.980 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 608554 00:19:58.980 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 608554 00:19:59.240 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:59.240 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:59.240 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:59.240 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:19:59.240 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:19:59.240 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:59.240 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:19:59.240 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:59.240 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:59.240 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.240 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:59.240 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:01.787 00:20:01.787 real 0m12.473s 00:20:01.787 user 0m13.878s 00:20:01.787 sys 0m6.739s 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:20:01.787 ************************************ 00:20:01.787 END TEST nvmf_bdevio_no_huge 00:20:01.787 ************************************ 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:01.787 ************************************ 00:20:01.787 START TEST nvmf_tls 00:20:01.787 ************************************ 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:01.787 * Looking for test storage... 00:20:01.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:01.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.787 --rc genhtml_branch_coverage=1 00:20:01.787 --rc genhtml_function_coverage=1 00:20:01.787 --rc genhtml_legend=1 00:20:01.787 --rc geninfo_all_blocks=1 00:20:01.787 --rc geninfo_unexecuted_blocks=1 00:20:01.787 00:20:01.787 ' 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:01.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.787 --rc genhtml_branch_coverage=1 00:20:01.787 --rc genhtml_function_coverage=1 00:20:01.787 --rc genhtml_legend=1 00:20:01.787 --rc geninfo_all_blocks=1 00:20:01.787 --rc geninfo_unexecuted_blocks=1 00:20:01.787 00:20:01.787 ' 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:01.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.787 --rc genhtml_branch_coverage=1 00:20:01.787 --rc genhtml_function_coverage=1 00:20:01.787 --rc genhtml_legend=1 00:20:01.787 --rc geninfo_all_blocks=1 00:20:01.787 --rc geninfo_unexecuted_blocks=1 00:20:01.787 00:20:01.787 ' 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:01.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.787 --rc genhtml_branch_coverage=1 00:20:01.787 --rc genhtml_function_coverage=1 00:20:01.787 --rc genhtml_legend=1 00:20:01.787 --rc geninfo_all_blocks=1 00:20:01.787 --rc geninfo_unexecuted_blocks=1 00:20:01.787 00:20:01.787 ' 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
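Every test file opens with the same lcov probe traced here: lt 1.15 2 defers to cmp_versions, which splits both version strings on '.', '-', and ':' (the IFS=.-: reads above) and compares them component by component as integers. A reduced sketch of the less-than path, assuming purely numeric components -- the real helper in scripts/common.sh also implements the other comparison operators:

    lt() {
            local -a ver1 ver2
            local i
            IFS='.-:' read -ra ver1 <<< "$1"
            IFS='.-:' read -ra ver2 <<< "$2"
            for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
                    ((${ver1[i]:-0} > ${ver2[i]:-0})) && return 1   # left side is newer
                    ((${ver1[i]:-0} < ${ver2[i]:-0})) && return 0   # left side is older
            done
            return 1   # equal versions are not less-than
    }

    lt 1.15 2 && echo "lcov older than 2"   # 1 < 2 on the first component: prints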
00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:01.787 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:01.788 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:01.788 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:20:01.788 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:01.788 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:01.788 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:01.788 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.788 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.788 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.788 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:01.788 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.788 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:20:01.788 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:01.788 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:01.788 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:01.788 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:01.788 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:01.788 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:01.788 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:01.788 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:01.788 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:01.788 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:01.788 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:01.788 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:20:01.788 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:01.788 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:01.788 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:01.788 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:01.788 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:01.788 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.788 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:01.788 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.788 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:01.788 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:01.788 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:20:01.788 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.926 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:09.926 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:20:09.926 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:09.926 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:09.926 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:09.926 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:09.926 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:09.926 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:20:09.926 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:09.926 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:20:09.926 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:20:09.926 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:20:09.926 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:20:09.926 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:20:09.926 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:20:09.926 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:09.926 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:09.926 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:09.926 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:09.926 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:09.926 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
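Note: two things are worth flagging in this stretch. First, the "[: : integer expression expected" complaint from nvmf/common.sh line 33 comes from testing an unset variable with '[' '' -eq 1 ']'; it is noisy but harmless here, since the guarded branch is simply skipped. Second, gather_supported_nvmf_pci_devs builds allow-lists of NIC PCI IDs (Intel E810 0x1592/0x159b, X722 0x37d2, and several Mellanox ConnectX IDs) out of a pci_bus_cache map and intersects them with what the host actually has. A rough sysfs-based equivalent of that scan, for illustration only (not the script's implementation):

    # list PCI functions matching the Intel E810 vendor:device pair 0x8086:0x159b
    for dev in /sys/bus/pci/devices/*; do
        read -r ven < "$dev/vendor"
        read -r did < "$dev/device"
        [[ $ven == 0x8086 && $did == 0x159b ]] && echo "${dev##*/}"
    done

On this node the scan turns up 0000:4b:00.0 and 0000:4b:00.1, both bound to the ice driver, as the next lines show.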
00:20:09.926 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:09.926 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:09.926 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:09.926 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:09.926 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:09.926 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:09.926 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:09.926 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:09.926 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:09.926 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:09.927 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:09.927 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:09.927 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:09.927 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:09.927 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:09.927 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:09.927 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:09.927 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:09.927 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:09.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:09.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:20:09.927 00:20:09.927 --- 10.0.0.2 ping statistics --- 00:20:09.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.927 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:20:09.927 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:09.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:09.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:20:09.927 00:20:09.927 --- 10.0.0.1 ping statistics --- 00:20:09.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.927 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:20:09.927 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:09.927 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:20:09.927 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:09.927 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:09.927 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:09.927 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:09.927 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:09.927 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:09.927 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:09.927 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:09.927 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:09.927 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:09.927 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.927 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=613249 00:20:09.927 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 613249 00:20:09.927 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 613249 ']' 00:20:09.927 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:09.927 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.927 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:09.927 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.927 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:09.927 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.927 [2024-11-20 15:29:58.165236] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
00:20:09.927 [2024-11-20 15:29:58.165302] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.927 [2024-11-20 15:29:58.267083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.927 [2024-11-20 15:29:58.318849] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:09.927 [2024-11-20 15:29:58.318902] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:09.927 [2024-11-20 15:29:58.318911] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:09.927 [2024-11-20 15:29:58.318918] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:09.927 [2024-11-20 15:29:58.318925] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:09.927 [2024-11-20 15:29:58.319671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:10.187 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:10.187 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:10.187 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:10.187 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:10.187 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.187 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:10.187 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:20:10.187 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:10.448 true 00:20:10.448 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:10.448 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:20:10.709 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:20:10.709 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:20:10.709 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:10.709 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:10.709 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:20:10.970 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:20:10.970 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:20:10.970 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:11.231 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:11.231 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:20:11.231 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:20:11.231 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:20:11.231 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:11.231 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:20:11.492 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:20:11.492 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:20:11.492 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:11.752 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:11.752 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:20:11.752 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:20:11.752 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:20:11.752 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:12.013 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:12.013 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:20:12.272 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:20:12.272 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:20:12.272 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:12.272 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:12.272 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:12.272 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:12.272 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:20:12.272 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:12.272 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:12.272 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:12.272 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:12.272 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:12.272 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:20:12.272 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:12.272 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:20:12.272 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:12.272 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:12.272 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:12.272 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:12.272 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.3kWrHgizK5 00:20:12.272 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:20:12.272 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.nYohHqZKDM 00:20:12.272 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:12.272 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:12.272 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.3kWrHgizK5 00:20:12.272 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.nYohHqZKDM 00:20:12.272 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:12.532 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:12.793 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.3kWrHgizK5 00:20:12.793 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.3kWrHgizK5 00:20:12.793 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:12.793 [2024-11-20 15:30:01.697523] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:12.793 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:13.053 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:13.314 [2024-11-20 15:30:02.030345] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:13.314 [2024-11-20 15:30:02.030552] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:13.314 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:13.314 malloc0 00:20:13.314 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:13.574 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.3kWrHgizK5 00:20:13.834 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:13.834 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.3kWrHgizK5 00:20:23.990 Initializing NVMe Controllers 00:20:23.990 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:23.990 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:23.990 Initialization complete. Launching workers. 00:20:23.990 ======================================================== 00:20:23.990 Latency(us) 00:20:23.990 Device Information : IOPS MiB/s Average min max 00:20:23.990 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18638.27 72.81 3434.00 1179.86 4111.96 00:20:23.990 ======================================================== 00:20:23.990 Total : 18638.27 72.81 3434.00 1179.86 4111.96 00:20:23.990 00:20:23.990 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3kWrHgizK5 00:20:23.990 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:23.990 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:23.990 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:23.990 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.3kWrHgizK5 00:20:23.990 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:23.990 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=616866 00:20:23.990 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:23.990 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 616866 /var/tmp/bdevperf.sock 00:20:23.990 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 616866 ']' 00:20:23.990 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:23.990 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:23.990 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:23.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
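Note: stripped of the xtrace noise, setup_nvmf_tgt configured TLS on the target side with a short RPC sequence (every line appears in the trace above; key0 points at the PSK interchange file /tmp/tmp.3kWrHgizK5 that was generated with format_interchange_psk, written via mktemp, and locked down with chmod 0600):

    rpc.py sock_impl_set_options -i ssl --tls-version 13
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.3kWrHgizK5
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The -k on the listener is what requests TLS (hence the "TLS support is considered experimental" notice), and --psk on the add_host call binds the host NQN to the registered key. The spdk_nvme_perf run above then connected through that listener with --psk-path and sustained roughly 18.6k IOPS; the bdevperf case starting here repeats the exercise with the key supplied through the initiator-side keyring instead.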
00:20:23.990 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:23.990 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:23.990 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:23.990 [2024-11-20 15:30:12.879742] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:20:23.990 [2024-11-20 15:30:12.879800] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid616866 ] 00:20:23.990 [2024-11-20 15:30:12.948715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.251 [2024-11-20 15:30:12.983623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.823 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:24.823 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:24.823 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3kWrHgizK5 00:20:25.084 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:25.084 [2024-11-20 15:30:13.983203] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:25.345 TLSTESTn1 00:20:25.345 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:25.345 Running I/O for 10 seconds... 
00:20:27.229 4166.00 IOPS, 16.27 MiB/s [2024-11-20T14:30:17.571Z] 4596.00 IOPS, 17.95 MiB/s [2024-11-20T14:30:18.512Z] 5002.00 IOPS, 19.54 MiB/s [2024-11-20T14:30:19.451Z] 5275.00 IOPS, 20.61 MiB/s [2024-11-20T14:30:20.393Z] 5383.20 IOPS, 21.03 MiB/s [2024-11-20T14:30:21.333Z] 5547.50 IOPS, 21.67 MiB/s [2024-11-20T14:30:22.275Z] 5658.86 IOPS, 22.10 MiB/s [2024-11-20T14:30:23.215Z] 5737.75 IOPS, 22.41 MiB/s [2024-11-20T14:30:24.597Z] 5817.89 IOPS, 22.73 MiB/s [2024-11-20T14:30:24.597Z] 5895.00 IOPS, 23.03 MiB/s 00:20:35.637 Latency(us) 00:20:35.637 [2024-11-20T14:30:24.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.637 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:35.637 Verification LBA range: start 0x0 length 0x2000 00:20:35.637 TLSTESTn1 : 10.01 5901.08 23.05 0.00 0.00 21658.14 4478.29 51336.53 00:20:35.637 [2024-11-20T14:30:24.597Z] =================================================================================================================== 00:20:35.637 [2024-11-20T14:30:24.597Z] Total : 5901.08 23.05 0.00 0.00 21658.14 4478.29 51336.53 00:20:35.637 { 00:20:35.637 "results": [ 00:20:35.637 { 00:20:35.637 "job": "TLSTESTn1", 00:20:35.637 "core_mask": "0x4", 00:20:35.637 "workload": "verify", 00:20:35.637 "status": "finished", 00:20:35.637 "verify_range": { 00:20:35.637 "start": 0, 00:20:35.637 "length": 8192 00:20:35.637 }, 00:20:35.637 "queue_depth": 128, 00:20:35.637 "io_size": 4096, 00:20:35.637 "runtime": 10.011221, 00:20:35.637 "iops": 5901.078399927442, 00:20:35.637 "mibps": 23.05108749971657, 00:20:35.637 "io_failed": 0, 00:20:35.637 "io_timeout": 0, 00:20:35.637 "avg_latency_us": 21658.13704374517, 00:20:35.637 "min_latency_us": 4478.293333333333, 00:20:35.637 "max_latency_us": 51336.53333333333 00:20:35.637 } 00:20:35.637 ], 00:20:35.637 "core_count": 1 00:20:35.637 } 00:20:35.637 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:35.637 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 616866 00:20:35.637 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 616866 ']' 00:20:35.637 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 616866 00:20:35.637 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:35.637 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:35.637 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 616866 00:20:35.637 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:35.637 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:35.637 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 616866' 00:20:35.637 killing process with pid 616866 00:20:35.637 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 616866 00:20:35.637 Received shutdown signal, test time was about 10.000000 seconds 00:20:35.637 00:20:35.637 Latency(us) 00:20:35.637 [2024-11-20T14:30:24.598Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.638 [2024-11-20T14:30:24.598Z] 
=================================================================================================================== 00:20:35.638 [2024-11-20T14:30:24.598Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:35.638 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 616866 00:20:35.638 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nYohHqZKDM 00:20:35.638 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:35.638 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nYohHqZKDM 00:20:35.638 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:35.638 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:35.638 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:35.638 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:35.638 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nYohHqZKDM 00:20:35.638 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:35.638 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:35.638 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:35.638 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.nYohHqZKDM 00:20:35.638 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:35.638 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=618967 00:20:35.638 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:35.638 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:35.638 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 618967 /var/tmp/bdevperf.sock 00:20:35.638 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 618967 ']' 00:20:35.638 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:35.638 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:35.638 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:35.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
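Note: case 147 is the first negative test. The initiator registers the second key, /tmp/tmp.nYohHqZKDM, which was never added to the target's keyring or bound to any host, and then attempts to attach; the NOT wrapper expects the attach to fail. As the trace below shows, the initiator-side pairing is effectively:

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nYohHqZKDM
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

Because the target cannot resolve a PSK for that identity, the TLS session is torn down, the initiator sees "Transport endpoint is not connected", and the RPC comes back with code -5 (Input/output error), which is exactly what the JSON-RPC dump below records.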
00:20:35.638 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:35.638 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.638 [2024-11-20 15:30:24.412698] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:20:35.638 [2024-11-20 15:30:24.412743] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid618967 ] 00:20:35.638 [2024-11-20 15:30:24.463372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.638 [2024-11-20 15:30:24.492045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:35.638 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:35.638 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:35.638 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nYohHqZKDM 00:20:35.898 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:36.158 [2024-11-20 15:30:24.881145] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:36.158 [2024-11-20 15:30:24.890296] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:36.158 [2024-11-20 15:30:24.891248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf01bb0 (107): Transport endpoint is not connected 00:20:36.158 [2024-11-20 15:30:24.892244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf01bb0 (9): Bad file descriptor 00:20:36.158 [2024-11-20 15:30:24.893246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:36.158 [2024-11-20 15:30:24.893254] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:36.158 [2024-11-20 15:30:24.893260] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:36.158 [2024-11-20 15:30:24.893268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:20:36.158 request: 00:20:36.158 { 00:20:36.158 "name": "TLSTEST", 00:20:36.158 "trtype": "tcp", 00:20:36.158 "traddr": "10.0.0.2", 00:20:36.158 "adrfam": "ipv4", 00:20:36.158 "trsvcid": "4420", 00:20:36.158 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.158 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:36.158 "prchk_reftag": false, 00:20:36.158 "prchk_guard": false, 00:20:36.158 "hdgst": false, 00:20:36.158 "ddgst": false, 00:20:36.158 "psk": "key0", 00:20:36.158 "allow_unrecognized_csi": false, 00:20:36.158 "method": "bdev_nvme_attach_controller", 00:20:36.158 "req_id": 1 00:20:36.158 } 00:20:36.158 Got JSON-RPC error response 00:20:36.158 response: 00:20:36.158 { 00:20:36.158 "code": -5, 00:20:36.158 "message": "Input/output error" 00:20:36.158 } 00:20:36.158 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 618967 00:20:36.158 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 618967 ']' 00:20:36.158 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 618967 00:20:36.158 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:36.158 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:36.158 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 618967 00:20:36.158 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:36.158 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:36.158 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 618967' 00:20:36.158 killing process with pid 618967 00:20:36.158 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 618967 00:20:36.158 Received shutdown signal, test time was about 10.000000 seconds 00:20:36.158 00:20:36.158 Latency(us) 00:20:36.158 [2024-11-20T14:30:25.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.158 [2024-11-20T14:30:25.118Z] =================================================================================================================== 00:20:36.158 [2024-11-20T14:30:25.118Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:36.158 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 618967 00:20:36.158 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:36.158 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:36.158 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:36.158 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:36.158 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:36.158 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3kWrHgizK5 00:20:36.158 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:36.158 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.3kWrHgizK5 00:20:36.158 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:36.158 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:36.158 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:36.158 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:36.158 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3kWrHgizK5 00:20:36.158 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:36.158 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:36.159 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:36.159 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.3kWrHgizK5 00:20:36.159 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:36.159 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=619225 00:20:36.159 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:36.159 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 619225 /var/tmp/bdevperf.sock 00:20:36.159 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:36.159 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 619225 ']' 00:20:36.159 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:36.159 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:36.159 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:36.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:36.159 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:36.159 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.418 [2024-11-20 15:30:25.125972] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
00:20:36.418 [2024-11-20 15:30:25.126026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid619225 ] 00:20:36.418 [2024-11-20 15:30:25.211330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.418 [2024-11-20 15:30:25.240138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.989 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:36.989 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:36.989 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3kWrHgizK5 00:20:37.249 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:20:37.509 [2024-11-20 15:30:26.258966] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:37.509 [2024-11-20 15:30:26.263558] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:37.509 [2024-11-20 15:30:26.263578] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:37.509 [2024-11-20 15:30:26.263597] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:37.509 [2024-11-20 15:30:26.264066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b2bb0 (107): Transport endpoint is not connected 00:20:37.509 [2024-11-20 15:30:26.265060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b2bb0 (9): Bad file descriptor 00:20:37.509 [2024-11-20 15:30:26.266061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:37.509 [2024-11-20 15:30:26.266070] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:37.509 [2024-11-20 15:30:26.266076] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:37.509 [2024-11-20 15:30:26.266085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:20:37.509 request: 00:20:37.509 { 00:20:37.509 "name": "TLSTEST", 00:20:37.509 "trtype": "tcp", 00:20:37.509 "traddr": "10.0.0.2", 00:20:37.509 "adrfam": "ipv4", 00:20:37.509 "trsvcid": "4420", 00:20:37.509 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.509 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:37.509 "prchk_reftag": false, 00:20:37.509 "prchk_guard": false, 00:20:37.509 "hdgst": false, 00:20:37.509 "ddgst": false, 00:20:37.509 "psk": "key0", 00:20:37.509 "allow_unrecognized_csi": false, 00:20:37.509 "method": "bdev_nvme_attach_controller", 00:20:37.509 "req_id": 1 00:20:37.509 } 00:20:37.509 Got JSON-RPC error response 00:20:37.509 response: 00:20:37.509 { 00:20:37.509 "code": -5, 00:20:37.509 "message": "Input/output error" 00:20:37.509 } 00:20:37.509 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 619225 00:20:37.509 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 619225 ']' 00:20:37.509 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 619225 00:20:37.509 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:37.509 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:37.509 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 619225 00:20:37.509 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:37.509 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:37.509 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 619225' 00:20:37.509 killing process with pid 619225 00:20:37.509 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 619225 00:20:37.509 Received shutdown signal, test time was about 10.000000 seconds 00:20:37.509 00:20:37.509 Latency(us) 00:20:37.509 [2024-11-20T14:30:26.469Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.509 [2024-11-20T14:30:26.469Z] =================================================================================================================== 00:20:37.509 [2024-11-20T14:30:26.469Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:37.509 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 619225 00:20:37.509 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:37.509 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:37.509 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:37.509 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:37.510 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:37.510 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3kWrHgizK5 00:20:37.510 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:37.510 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.3kWrHgizK5 00:20:37.510 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:37.510 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:37.510 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:37.510 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:37.510 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3kWrHgizK5 00:20:37.510 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:37.510 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:37.510 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:37.510 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.3kWrHgizK5 00:20:37.510 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:37.510 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=619472 00:20:37.510 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:37.510 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 619472 /var/tmp/bdevperf.sock 00:20:37.510 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:37.510 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 619472 ']' 00:20:37.510 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:37.510 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:37.510 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:37.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:37.510 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:37.510 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.770 [2024-11-20 15:30:26.520850] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
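Each of these cases drives the same initiator-side choreography; condensed from the commands traced above ($rootdir stands in for the absolute /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout):

    # 1. Start bdevperf idle (-z), listening on its own RPC socket.
    "$rootdir/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 &
    # 2. Load the PSK file into bdevperf's keyring under the name key0.
    "$rootdir/scripts/rpc.py" -s /var/tmp/bdevperf.sock \
        keyring_file_add_key key0 /tmp/tmp.3kWrHgizK5
    # 3. Attach a TLS-protected NVMe/TCP controller referencing that key.
    "$rootdir/scripts/rpc.py" -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0

Whether step 3 succeeds is the whole point of each case; here it is expected to fail, because the target has no PSK registered under the host1/cnode2 identity.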
00:20:37.770 [2024-11-20 15:30:26.520908] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid619472 ] 00:20:37.770 [2024-11-20 15:30:26.606969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.770 [2024-11-20 15:30:26.635018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:38.711 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:38.711 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:38.711 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3kWrHgizK5 00:20:38.711 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:38.711 [2024-11-20 15:30:27.641614] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:38.711 [2024-11-20 15:30:27.646903] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:38.711 [2024-11-20 15:30:27.646921] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:38.711 [2024-11-20 15:30:27.646939] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:38.711 [2024-11-20 15:30:27.647809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1637bb0 (107): Transport endpoint is not connected 00:20:38.711 [2024-11-20 15:30:27.648806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1637bb0 (9): Bad file descriptor 00:20:38.711 [2024-11-20 15:30:27.649807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:20:38.711 [2024-11-20 15:30:27.649816] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:38.711 [2024-11-20 15:30:27.649822] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:20:38.711 [2024-11-20 15:30:27.649830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:20:38.711 request: 00:20:38.711 { 00:20:38.711 "name": "TLSTEST", 00:20:38.711 "trtype": "tcp", 00:20:38.711 "traddr": "10.0.0.2", 00:20:38.711 "adrfam": "ipv4", 00:20:38.711 "trsvcid": "4420", 00:20:38.711 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:38.711 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:38.711 "prchk_reftag": false, 00:20:38.711 "prchk_guard": false, 00:20:38.711 "hdgst": false, 00:20:38.711 "ddgst": false, 00:20:38.711 "psk": "key0", 00:20:38.711 "allow_unrecognized_csi": false, 00:20:38.711 "method": "bdev_nvme_attach_controller", 00:20:38.711 "req_id": 1 00:20:38.711 } 00:20:38.711 Got JSON-RPC error response 00:20:38.711 response: 00:20:38.711 { 00:20:38.711 "code": -5, 00:20:38.711 "message": "Input/output error" 00:20:38.711 } 00:20:38.973 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 619472 00:20:38.973 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 619472 ']' 00:20:38.973 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 619472 00:20:38.973 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:38.973 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:38.973 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 619472 00:20:38.973 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:38.973 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:38.973 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 619472' 00:20:38.973 killing process with pid 619472 00:20:38.973 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 619472 00:20:38.973 Received shutdown signal, test time was about 10.000000 seconds 00:20:38.973 00:20:38.973 Latency(us) 00:20:38.973 [2024-11-20T14:30:27.933Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.973 [2024-11-20T14:30:27.933Z] =================================================================================================================== 00:20:38.973 [2024-11-20T14:30:27.933Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:38.973 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 619472 00:20:38.973 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:38.973 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:38.973 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:38.973 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:38.973 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:38.973 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:38.973 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:38.973 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:38.973 15:30:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:38.973 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:38.973 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:38.973 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:38.973 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:38.973 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:38.973 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:38.973 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:38.973 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:38.973 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:38.973 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=619644 00:20:38.973 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:38.973 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 619644 /var/tmp/bdevperf.sock 00:20:38.973 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:38.973 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 619644 ']' 00:20:38.973 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:38.973 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:38.973 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:38.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:38.973 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:38.973 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.973 [2024-11-20 15:30:27.893008] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
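The valid_exec_arg/es bookkeeping traced above is the suite's negative-test wrapper: run the command, capture its exit status, and count the case as passed only if the status was non-zero. A minimal sketch of the pattern (the real helper in autotest_common.sh additionally validates the argument is a function, which is what the 'type -t' records show):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))   # invert the result: a non-zero exit from the command is a pass
    }
    NOT false && echo 'expected failure observed'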
00:20:38.973 [2024-11-20 15:30:27.893063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid619644 ] 00:20:39.233 [2024-11-20 15:30:27.978108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.233 [2024-11-20 15:30:28.006248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:39.805 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:39.805 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:39.805 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:20:40.064 [2024-11-20 15:30:28.844371] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:20:40.064 [2024-11-20 15:30:28.844397] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:40.064 request: 00:20:40.064 { 00:20:40.064 "name": "key0", 00:20:40.065 "path": "", 00:20:40.065 "method": "keyring_file_add_key", 00:20:40.065 "req_id": 1 00:20:40.065 } 00:20:40.065 Got JSON-RPC error response 00:20:40.065 response: 00:20:40.065 { 00:20:40.065 "code": -1, 00:20:40.065 "message": "Operation not permitted" 00:20:40.065 } 00:20:40.065 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:40.065 [2024-11-20 15:30:29.016888] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:40.065 [2024-11-20 15:30:29.016914] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:40.065 request: 00:20:40.065 { 00:20:40.065 "name": "TLSTEST", 00:20:40.065 "trtype": "tcp", 00:20:40.065 "traddr": "10.0.0.2", 00:20:40.065 "adrfam": "ipv4", 00:20:40.065 "trsvcid": "4420", 00:20:40.065 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:40.065 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:40.065 "prchk_reftag": false, 00:20:40.065 "prchk_guard": false, 00:20:40.065 "hdgst": false, 00:20:40.065 "ddgst": false, 00:20:40.065 "psk": "key0", 00:20:40.065 "allow_unrecognized_csi": false, 00:20:40.065 "method": "bdev_nvme_attach_controller", 00:20:40.065 "req_id": 1 00:20:40.065 } 00:20:40.065 Got JSON-RPC error response 00:20:40.065 response: 00:20:40.065 { 00:20:40.065 "code": -126, 00:20:40.065 "message": "Required key not available" 00:20:40.065 } 00:20:40.330 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 619644 00:20:40.330 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 619644 ']' 00:20:40.330 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 619644 00:20:40.330 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:40.330 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:40.330 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 619644 
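The empty-string key path above never reached the network: keyring_file_check_path rejected it up front ("Non-absolute paths are not allowed"), keyring_file_add_key returned -1, and the subsequent attach failed with -126 ("Required key not available"). The same validator also insists on owner-only file permissions, which the 0666 experiment later in this run trips over. A shell-level approximation of both checks (a sketch only; the real logic lives in keyring.c):

    check_key_path() {
        local path=$1 mode
        [[ $path == /* ]] || { echo 'Non-absolute paths are not allowed' >&2; return 1; }
        mode=$(stat -c '%a' "$path")   # GNU stat, as on these Linux test nodes
        if (( 8#$mode & 8#077 )); then
            # any group/other access bit disqualifies the file
            echo "Invalid permissions for key file '$path': 0$mode" >&2
            return 1
        fi
    }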
00:20:40.330 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:40.330 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:40.330 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 619644' 00:20:40.330 killing process with pid 619644 00:20:40.330 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 619644 00:20:40.330 Received shutdown signal, test time was about 10.000000 seconds 00:20:40.330 00:20:40.330 Latency(us) 00:20:40.330 [2024-11-20T14:30:29.291Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.331 [2024-11-20T14:30:29.291Z] =================================================================================================================== 00:20:40.331 [2024-11-20T14:30:29.291Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:40.331 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 619644 00:20:40.331 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:40.331 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:40.331 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:40.331 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:40.331 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:40.331 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 613249 00:20:40.331 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 613249 ']' 00:20:40.331 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 613249 00:20:40.331 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:40.331 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:40.331 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 613249 00:20:40.331 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:40.331 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:40.331 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 613249' 00:20:40.331 killing process with pid 613249 00:20:40.331 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 613249 00:20:40.331 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 613249 00:20:40.593 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:40.593 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:40.593 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:40.594 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:40.594 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:40.594 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:20:40.594 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:40.594 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:40.594 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:20:40.594 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.69cw9NEbjJ 00:20:40.594 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:40.594 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.69cw9NEbjJ 00:20:40.594 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:20:40.594 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:40.594 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:40.594 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.594 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=619954 00:20:40.594 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 619954 00:20:40.594 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:40.594 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 619954 ']' 00:20:40.594 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.594 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:40.594 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.594 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:40.594 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.594 [2024-11-20 15:30:29.490333] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:20:40.594 [2024-11-20 15:30:29.490397] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.853 [2024-11-20 15:30:29.582258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.853 [2024-11-20 15:30:29.614338] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.853 [2024-11-20 15:30:29.614369] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
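The inline 'python -' step traced above is format_interchange_psk: it wraps the configured secret in the NVMe TLS PSK interchange form, base64 of the secret bytes with their CRC32 appended, framed as "NVMeTLSkey-1:<digest>:...:" (digest 02 selecting SHA-384 here; 01 would be SHA-256). A self-contained stand-in in the same shell-plus-embedded-python style the script itself uses — little-endian CRC placement assumed; with the inputs above it should reproduce the key_long value recorded in the log:

    format_interchange_psk() {
        local key=$1 digest=$2
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); c=zlib.crc32(k).to_bytes(4,"little"); print(f"NVMeTLSkey-1:{int(sys.argv[2]):02}:{base64.b64encode(k+c).decode()}:")' "$key" "$digest"
    }
    format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2
    # -> NVMeTLSkey-1:02:MDAx...wWXNJw==: (the key_long captured above)

The mktemp/chmod 0600 pairing that follows matters for the same reason as the permission check noted earlier: the keyring refuses key files readable by group or other.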
00:20:40.853 [2024-11-20 15:30:29.614375] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:40.853 [2024-11-20 15:30:29.614380] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:40.853 [2024-11-20 15:30:29.614385] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:40.853 [2024-11-20 15:30:29.614842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:41.422 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:41.422 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:41.422 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:41.422 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:41.422 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.422 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:41.422 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.69cw9NEbjJ 00:20:41.422 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.69cw9NEbjJ 00:20:41.423 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:41.682 [2024-11-20 15:30:30.468041] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:41.682 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:41.943 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:41.943 [2024-11-20 15:30:30.784826] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:41.943 [2024-11-20 15:30:30.785031] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:41.943 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:42.202 malloc0 00:20:42.202 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:42.202 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.69cw9NEbjJ 00:20:42.462 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:42.722 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.69cw9NEbjJ 00:20:42.722 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:20:42.722 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:42.722 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:42.722 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.69cw9NEbjJ 00:20:42.723 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:42.723 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=620413 00:20:42.723 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:42.723 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:42.723 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 620413 /var/tmp/bdevperf.sock 00:20:42.723 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 620413 ']' 00:20:42.723 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:42.723 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:42.723 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:42.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:42.723 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:42.723 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.723 [2024-11-20 15:30:31.472713] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
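The positive case also needs the target half, which setup_nvmf_tgt assembled just above. In sequence, against the nvmf target's default RPC socket ($rootdir as before):

    "$rootdir/scripts/rpc.py" nvmf_create_transport -t tcp -o
    "$rootdir/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -s SPDK00000000000001 -m 10
    "$rootdir/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k          # -k marks the listener as TLS
    "$rootdir/scripts/rpc.py" bdev_malloc_create 32 4096 -b malloc0
    "$rootdir/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    "$rootdir/scripts/rpc.py" keyring_file_add_key key0 /tmp/tmp.69cw9NEbjJ
    "$rootdir/scripts/rpc.py" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk key0   # bind the PSK to this host/subsystem pair

With the key now registered for host1 against cnode1, the identity lookup that failed in the earlier cases can succeed, and the TLSTESTn1 run below completes its full 10-second verify workload.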
00:20:42.723 [2024-11-20 15:30:31.472755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid620413 ] 00:20:42.723 [2024-11-20 15:30:31.522292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.723 [2024-11-20 15:30:31.551473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:42.723 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:42.723 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:42.723 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.69cw9NEbjJ 00:20:42.984 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:42.984 [2024-11-20 15:30:31.941023] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:43.243 TLSTESTn1 00:20:43.243 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:43.243 Running I/O for 10 seconds... 00:20:45.566 5164.00 IOPS, 20.17 MiB/s [2024-11-20T14:30:35.468Z] 5296.00 IOPS, 20.69 MiB/s [2024-11-20T14:30:36.409Z] 5366.67 IOPS, 20.96 MiB/s [2024-11-20T14:30:37.353Z] 5619.50 IOPS, 21.95 MiB/s [2024-11-20T14:30:38.295Z] 5607.00 IOPS, 21.90 MiB/s [2024-11-20T14:30:39.237Z] 5516.50 IOPS, 21.55 MiB/s [2024-11-20T14:30:40.179Z] 5515.14 IOPS, 21.54 MiB/s [2024-11-20T14:30:41.166Z] 5571.88 IOPS, 21.77 MiB/s [2024-11-20T14:30:42.181Z] 5466.00 IOPS, 21.35 MiB/s [2024-11-20T14:30:42.462Z] 5445.60 IOPS, 21.27 MiB/s 00:20:53.502 Latency(us) 00:20:53.502 [2024-11-20T14:30:42.462Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:53.502 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:53.502 Verification LBA range: start 0x0 length 0x2000 00:20:53.502 TLSTESTn1 : 10.01 5450.75 21.29 0.00 0.00 23451.70 5597.87 223696.21 00:20:53.502 [2024-11-20T14:30:42.462Z] =================================================================================================================== 00:20:53.502 [2024-11-20T14:30:42.462Z] Total : 5450.75 21.29 0.00 0.00 23451.70 5597.87 223696.21 00:20:53.502 { 00:20:53.502 "results": [ 00:20:53.502 { 00:20:53.502 "job": "TLSTESTn1", 00:20:53.502 "core_mask": "0x4", 00:20:53.502 "workload": "verify", 00:20:53.502 "status": "finished", 00:20:53.502 "verify_range": { 00:20:53.502 "start": 0, 00:20:53.502 "length": 8192 00:20:53.502 }, 00:20:53.502 "queue_depth": 128, 00:20:53.502 "io_size": 4096, 00:20:53.502 "runtime": 10.013672, 00:20:53.502 "iops": 5450.7477376930265, 00:20:53.502 "mibps": 21.291983350363385, 00:20:53.502 "io_failed": 0, 00:20:53.502 "io_timeout": 0, 00:20:53.502 "avg_latency_us": 23451.698954233998, 00:20:53.502 "min_latency_us": 5597.866666666667, 00:20:53.502 "max_latency_us": 223696.21333333335 00:20:53.502 } 00:20:53.503 ], 00:20:53.503 
"core_count": 1 00:20:53.503 } 00:20:53.503 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:53.503 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 620413 00:20:53.503 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 620413 ']' 00:20:53.503 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 620413 00:20:53.503 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:53.503 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:53.503 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 620413 00:20:53.503 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:53.503 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:53.503 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 620413' 00:20:53.503 killing process with pid 620413 00:20:53.503 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 620413 00:20:53.503 Received shutdown signal, test time was about 10.000000 seconds 00:20:53.503 00:20:53.503 Latency(us) 00:20:53.503 [2024-11-20T14:30:42.463Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:53.503 [2024-11-20T14:30:42.463Z] =================================================================================================================== 00:20:53.503 [2024-11-20T14:30:42.463Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:53.503 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 620413 00:20:53.503 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.69cw9NEbjJ 00:20:53.503 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.69cw9NEbjJ 00:20:53.503 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:53.503 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.69cw9NEbjJ 00:20:53.503 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:53.503 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:53.503 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:53.503 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:53.503 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.69cw9NEbjJ 00:20:53.503 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:53.503 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:53.503 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:53.503 
15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.69cw9NEbjJ 00:20:53.503 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:53.503 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=622645 00:20:53.503 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:53.503 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:53.503 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 622645 /var/tmp/bdevperf.sock 00:20:53.503 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 622645 ']' 00:20:53.503 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:53.503 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:53.503 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:53.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:53.503 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:53.503 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:53.503 [2024-11-20 15:30:42.379168] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
00:20:53.503 [2024-11-20 15:30:42.379212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid622645 ] 00:20:53.503 [2024-11-20 15:30:42.429493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.503 [2024-11-20 15:30:42.457993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:53.763 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:53.763 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:53.763 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.69cw9NEbjJ 00:20:53.763 [2024-11-20 15:30:42.678635] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.69cw9NEbjJ': 0100666 00:20:53.763 [2024-11-20 15:30:42.678656] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:53.763 request: 00:20:53.763 { 00:20:53.763 "name": "key0", 00:20:53.763 "path": "/tmp/tmp.69cw9NEbjJ", 00:20:53.763 "method": "keyring_file_add_key", 00:20:53.763 "req_id": 1 00:20:53.763 } 00:20:53.763 Got JSON-RPC error response 00:20:53.763 response: 00:20:53.763 { 00:20:53.763 "code": -1, 00:20:53.763 "message": "Operation not permitted" 00:20:53.763 } 00:20:53.763 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:54.023 [2024-11-20 15:30:42.851133] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:54.023 [2024-11-20 15:30:42.851160] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:54.023 request: 00:20:54.023 { 00:20:54.023 "name": "TLSTEST", 00:20:54.023 "trtype": "tcp", 00:20:54.023 "traddr": "10.0.0.2", 00:20:54.023 "adrfam": "ipv4", 00:20:54.023 "trsvcid": "4420", 00:20:54.023 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:54.023 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:54.023 "prchk_reftag": false, 00:20:54.023 "prchk_guard": false, 00:20:54.023 "hdgst": false, 00:20:54.023 "ddgst": false, 00:20:54.023 "psk": "key0", 00:20:54.023 "allow_unrecognized_csi": false, 00:20:54.023 "method": "bdev_nvme_attach_controller", 00:20:54.023 "req_id": 1 00:20:54.023 } 00:20:54.023 Got JSON-RPC error response 00:20:54.023 response: 00:20:54.023 { 00:20:54.023 "code": -126, 00:20:54.023 "message": "Required key not available" 00:20:54.023 } 00:20:54.023 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 622645 00:20:54.023 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 622645 ']' 00:20:54.023 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 622645 00:20:54.023 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:54.023 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:54.023 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 622645 00:20:54.023 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:54.023 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:54.023 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 622645' 00:20:54.024 killing process with pid 622645 00:20:54.024 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 622645 00:20:54.024 Received shutdown signal, test time was about 10.000000 seconds 00:20:54.024 00:20:54.024 Latency(us) 00:20:54.024 [2024-11-20T14:30:42.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.024 [2024-11-20T14:30:42.984Z] =================================================================================================================== 00:20:54.024 [2024-11-20T14:30:42.984Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:54.024 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 622645 00:20:54.284 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:54.284 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:54.284 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:54.284 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:54.284 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:54.284 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 619954 00:20:54.284 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 619954 ']' 00:20:54.284 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 619954 00:20:54.284 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:54.284 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:54.284 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 619954 00:20:54.284 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:54.284 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:54.284 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 619954' 00:20:54.284 killing process with pid 619954 00:20:54.284 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 619954 00:20:54.284 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 619954 00:20:54.284 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:20:54.284 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:54.284 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:54.284 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.284 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=622676 
00:20:54.284 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 622676 00:20:54.284 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:54.284 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 622676 ']' 00:20:54.284 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:54.284 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:54.284 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:54.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:54.284 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:54.284 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.544 [2024-11-20 15:30:43.261218] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:20:54.544 [2024-11-20 15:30:43.261278] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:54.544 [2024-11-20 15:30:43.350059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.544 [2024-11-20 15:30:43.379724] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:54.544 [2024-11-20 15:30:43.379750] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:54.544 [2024-11-20 15:30:43.379756] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:54.544 [2024-11-20 15:30:43.379761] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:54.544 [2024-11-20 15:30:43.379766] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
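The recurring "Waiting for process to start up and listen on UNIX domain socket ..." messages come from waitforlisten, which polls the new app's RPC socket (up to the max_retries=100 visible in the trace) before the script proceeds. A sketch of the idea — rpc_get_methods is used here as a cheap liveness probe, $rootdir again stands for the SPDK checkout, and the real helper differs in detail:

    waitforlisten() {
        local pid=$1 sock=$2 i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1   # app died while starting
            "$rootdir/scripts/rpc.py" -s "$sock" rpc_get_methods &> /dev/null && return 0
            sleep 0.1
        done
        return 1   # never came up within the retry budget
    }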
00:20:54.544 [2024-11-20 15:30:43.380222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:55.115 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:55.116 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:55.116 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:55.116 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:55.116 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:55.376 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:55.376 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.69cw9NEbjJ 00:20:55.376 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:55.376 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.69cw9NEbjJ 00:20:55.376 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:20:55.376 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:55.376 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:20:55.376 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:55.376 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.69cw9NEbjJ 00:20:55.376 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.69cw9NEbjJ 00:20:55.376 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:55.376 [2024-11-20 15:30:44.239432] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:55.376 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:55.636 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:55.636 [2024-11-20 15:30:44.544185] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:55.636 [2024-11-20 15:30:44.544383] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:55.636 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:55.897 malloc0 00:20:55.897 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:56.156 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.69cw9NEbjJ 00:20:56.156 [2024-11-20 
15:30:45.031251] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.69cw9NEbjJ': 0100666 00:20:56.156 [2024-11-20 15:30:45.031270] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:56.156 request: 00:20:56.156 { 00:20:56.156 "name": "key0", 00:20:56.156 "path": "/tmp/tmp.69cw9NEbjJ", 00:20:56.156 "method": "keyring_file_add_key", 00:20:56.156 "req_id": 1 00:20:56.156 } 00:20:56.156 Got JSON-RPC error response 00:20:56.156 response: 00:20:56.156 { 00:20:56.156 "code": -1, 00:20:56.156 "message": "Operation not permitted" 00:20:56.156 } 00:20:56.156 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:56.417 [2024-11-20 15:30:45.183640] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:20:56.417 [2024-11-20 15:30:45.183669] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:56.417 request: 00:20:56.417 { 00:20:56.417 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.417 "host": "nqn.2016-06.io.spdk:host1", 00:20:56.417 "psk": "key0", 00:20:56.417 "method": "nvmf_subsystem_add_host", 00:20:56.417 "req_id": 1 00:20:56.417 } 00:20:56.417 Got JSON-RPC error response 00:20:56.417 response: 00:20:56.417 { 00:20:56.417 "code": -32603, 00:20:56.417 "message": "Internal error" 00:20:56.417 } 00:20:56.417 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:56.417 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:56.417 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:56.417 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:56.417 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 622676 00:20:56.417 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 622676 ']' 00:20:56.417 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 622676 00:20:56.417 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:56.417 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:56.417 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 622676 00:20:56.417 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:56.417 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:56.417 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 622676' 00:20:56.417 killing process with pid 622676 00:20:56.417 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 622676 00:20:56.417 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 622676 00:20:56.417 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.69cw9NEbjJ 00:20:56.417 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:20:56.677 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:56.677 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:56.677 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:56.677 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=623154 00:20:56.677 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 623154 00:20:56.678 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:56.678 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 623154 ']' 00:20:56.678 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.678 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:56.678 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.678 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:56.678 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:56.678 [2024-11-20 15:30:45.441835] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:20:56.678 [2024-11-20 15:30:45.441895] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:56.678 [2024-11-20 15:30:45.533000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.678 [2024-11-20 15:30:45.563115] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:56.678 [2024-11-20 15:30:45.563142] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:56.678 [2024-11-20 15:30:45.563148] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:56.678 [2024-11-20 15:30:45.563152] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:56.678 [2024-11-20 15:30:45.563156] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
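This rebuilt target (key file restored to 0600) is the one whose state gets captured a little further on via 'rpc.py save_config': that call serializes the live configuration of every subsystem — keyring, sock, bdev, nvmf and so on — which is what the long tgtconf JSON dumped below is. One way such a dump can be fed back to a fresh app (illustrative file name; the JSON-config startup flag is assumed available on this build):

    "$rootdir/scripts/rpc.py" save_config > /tmp/tgt_config.json
    "$rootdir/build/bin/nvmf_tgt" --json /tmp/tgt_config.json   # replay the captured state at startup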
00:20:56.678 [2024-11-20 15:30:45.563610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:57.619 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:57.619 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:57.619 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:57.619 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:57.619 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.619 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:57.619 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.69cw9NEbjJ 00:20:57.619 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.69cw9NEbjJ 00:20:57.619 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:57.619 [2024-11-20 15:30:46.411236] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:57.619 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:57.879 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:57.879 [2024-11-20 15:30:46.772122] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:57.879 [2024-11-20 15:30:46.772331] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:57.879 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:58.139 malloc0 00:20:58.139 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:58.400 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.69cw9NEbjJ 00:20:58.400 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:58.661 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=623709 00:20:58.661 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:58.661 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:58.661 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 623709 /var/tmp/bdevperf.sock 00:20:58.661 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 623709 ']' 00:20:58.661 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:58.661 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:58.661 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:58.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:58.661 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:58.661 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.661 [2024-11-20 15:30:47.562944] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:20:58.661 [2024-11-20 15:30:47.563000] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid623709 ] 00:20:58.921 [2024-11-20 15:30:47.644380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.921 [2024-11-20 15:30:47.673123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:59.492 15:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:59.492 15:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:59.492 15:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.69cw9NEbjJ 00:20:59.752 15:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:59.752 [2024-11-20 15:30:48.699957] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:00.012 TLSTESTn1 00:21:00.012 15:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:00.272 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:21:00.272 "subsystems": [ 00:21:00.272 { 00:21:00.272 "subsystem": "keyring", 00:21:00.272 "config": [ 00:21:00.272 { 00:21:00.272 "method": "keyring_file_add_key", 00:21:00.272 "params": { 00:21:00.272 "name": "key0", 00:21:00.272 "path": "/tmp/tmp.69cw9NEbjJ" 00:21:00.272 } 00:21:00.272 } 00:21:00.272 ] 00:21:00.272 }, 00:21:00.272 { 00:21:00.272 "subsystem": "iobuf", 00:21:00.272 "config": [ 00:21:00.272 { 00:21:00.272 "method": "iobuf_set_options", 00:21:00.272 "params": { 00:21:00.272 "small_pool_count": 8192, 00:21:00.272 "large_pool_count": 1024, 00:21:00.272 "small_bufsize": 8192, 00:21:00.272 "large_bufsize": 135168, 00:21:00.272 "enable_numa": false 00:21:00.272 } 00:21:00.272 } 00:21:00.272 ] 00:21:00.272 }, 00:21:00.272 { 00:21:00.272 "subsystem": "sock", 00:21:00.272 "config": [ 00:21:00.272 { 00:21:00.272 "method": "sock_set_default_impl", 00:21:00.272 "params": { 00:21:00.272 "impl_name": "posix" 
00:21:00.272 } 00:21:00.272 }, 00:21:00.272 { 00:21:00.272 "method": "sock_impl_set_options", 00:21:00.272 "params": { 00:21:00.272 "impl_name": "ssl", 00:21:00.272 "recv_buf_size": 4096, 00:21:00.272 "send_buf_size": 4096, 00:21:00.272 "enable_recv_pipe": true, 00:21:00.272 "enable_quickack": false, 00:21:00.272 "enable_placement_id": 0, 00:21:00.272 "enable_zerocopy_send_server": true, 00:21:00.272 "enable_zerocopy_send_client": false, 00:21:00.272 "zerocopy_threshold": 0, 00:21:00.272 "tls_version": 0, 00:21:00.272 "enable_ktls": false 00:21:00.272 } 00:21:00.272 }, 00:21:00.272 { 00:21:00.272 "method": "sock_impl_set_options", 00:21:00.272 "params": { 00:21:00.272 "impl_name": "posix", 00:21:00.272 "recv_buf_size": 2097152, 00:21:00.272 "send_buf_size": 2097152, 00:21:00.272 "enable_recv_pipe": true, 00:21:00.272 "enable_quickack": false, 00:21:00.272 "enable_placement_id": 0, 00:21:00.272 "enable_zerocopy_send_server": true, 00:21:00.272 "enable_zerocopy_send_client": false, 00:21:00.273 "zerocopy_threshold": 0, 00:21:00.273 "tls_version": 0, 00:21:00.273 "enable_ktls": false 00:21:00.273 } 00:21:00.273 } 00:21:00.273 ] 00:21:00.273 }, 00:21:00.273 { 00:21:00.273 "subsystem": "vmd", 00:21:00.273 "config": [] 00:21:00.273 }, 00:21:00.273 { 00:21:00.273 "subsystem": "accel", 00:21:00.273 "config": [ 00:21:00.273 { 00:21:00.273 "method": "accel_set_options", 00:21:00.273 "params": { 00:21:00.273 "small_cache_size": 128, 00:21:00.273 "large_cache_size": 16, 00:21:00.273 "task_count": 2048, 00:21:00.273 "sequence_count": 2048, 00:21:00.273 "buf_count": 2048 00:21:00.273 } 00:21:00.273 } 00:21:00.273 ] 00:21:00.273 }, 00:21:00.273 { 00:21:00.273 "subsystem": "bdev", 00:21:00.273 "config": [ 00:21:00.273 { 00:21:00.273 "method": "bdev_set_options", 00:21:00.273 "params": { 00:21:00.273 "bdev_io_pool_size": 65535, 00:21:00.273 "bdev_io_cache_size": 256, 00:21:00.273 "bdev_auto_examine": true, 00:21:00.273 "iobuf_small_cache_size": 128, 00:21:00.273 "iobuf_large_cache_size": 16 00:21:00.273 } 00:21:00.273 }, 00:21:00.273 { 00:21:00.273 "method": "bdev_raid_set_options", 00:21:00.273 "params": { 00:21:00.273 "process_window_size_kb": 1024, 00:21:00.273 "process_max_bandwidth_mb_sec": 0 00:21:00.273 } 00:21:00.273 }, 00:21:00.273 { 00:21:00.273 "method": "bdev_iscsi_set_options", 00:21:00.273 "params": { 00:21:00.273 "timeout_sec": 30 00:21:00.273 } 00:21:00.273 }, 00:21:00.273 { 00:21:00.273 "method": "bdev_nvme_set_options", 00:21:00.273 "params": { 00:21:00.273 "action_on_timeout": "none", 00:21:00.273 "timeout_us": 0, 00:21:00.273 "timeout_admin_us": 0, 00:21:00.273 "keep_alive_timeout_ms": 10000, 00:21:00.273 "arbitration_burst": 0, 00:21:00.273 "low_priority_weight": 0, 00:21:00.273 "medium_priority_weight": 0, 00:21:00.273 "high_priority_weight": 0, 00:21:00.273 "nvme_adminq_poll_period_us": 10000, 00:21:00.273 "nvme_ioq_poll_period_us": 0, 00:21:00.273 "io_queue_requests": 0, 00:21:00.273 "delay_cmd_submit": true, 00:21:00.273 "transport_retry_count": 4, 00:21:00.273 "bdev_retry_count": 3, 00:21:00.273 "transport_ack_timeout": 0, 00:21:00.273 "ctrlr_loss_timeout_sec": 0, 00:21:00.273 "reconnect_delay_sec": 0, 00:21:00.273 "fast_io_fail_timeout_sec": 0, 00:21:00.273 "disable_auto_failback": false, 00:21:00.273 "generate_uuids": false, 00:21:00.273 "transport_tos": 0, 00:21:00.273 "nvme_error_stat": false, 00:21:00.273 "rdma_srq_size": 0, 00:21:00.273 "io_path_stat": false, 00:21:00.273 "allow_accel_sequence": false, 00:21:00.273 "rdma_max_cq_size": 0, 00:21:00.273 
"rdma_cm_event_timeout_ms": 0, 00:21:00.273 "dhchap_digests": [ 00:21:00.273 "sha256", 00:21:00.273 "sha384", 00:21:00.273 "sha512" 00:21:00.273 ], 00:21:00.273 "dhchap_dhgroups": [ 00:21:00.273 "null", 00:21:00.273 "ffdhe2048", 00:21:00.273 "ffdhe3072", 00:21:00.273 "ffdhe4096", 00:21:00.273 "ffdhe6144", 00:21:00.273 "ffdhe8192" 00:21:00.273 ] 00:21:00.273 } 00:21:00.273 }, 00:21:00.273 { 00:21:00.273 "method": "bdev_nvme_set_hotplug", 00:21:00.273 "params": { 00:21:00.273 "period_us": 100000, 00:21:00.273 "enable": false 00:21:00.273 } 00:21:00.273 }, 00:21:00.273 { 00:21:00.273 "method": "bdev_malloc_create", 00:21:00.273 "params": { 00:21:00.273 "name": "malloc0", 00:21:00.273 "num_blocks": 8192, 00:21:00.273 "block_size": 4096, 00:21:00.273 "physical_block_size": 4096, 00:21:00.273 "uuid": "bcce33bc-b022-4193-9f7a-6e7cffb15e3d", 00:21:00.273 "optimal_io_boundary": 0, 00:21:00.273 "md_size": 0, 00:21:00.273 "dif_type": 0, 00:21:00.273 "dif_is_head_of_md": false, 00:21:00.273 "dif_pi_format": 0 00:21:00.273 } 00:21:00.273 }, 00:21:00.273 { 00:21:00.273 "method": "bdev_wait_for_examine" 00:21:00.273 } 00:21:00.273 ] 00:21:00.273 }, 00:21:00.273 { 00:21:00.273 "subsystem": "nbd", 00:21:00.273 "config": [] 00:21:00.273 }, 00:21:00.273 { 00:21:00.273 "subsystem": "scheduler", 00:21:00.273 "config": [ 00:21:00.273 { 00:21:00.273 "method": "framework_set_scheduler", 00:21:00.273 "params": { 00:21:00.273 "name": "static" 00:21:00.273 } 00:21:00.273 } 00:21:00.273 ] 00:21:00.273 }, 00:21:00.273 { 00:21:00.273 "subsystem": "nvmf", 00:21:00.273 "config": [ 00:21:00.273 { 00:21:00.273 "method": "nvmf_set_config", 00:21:00.273 "params": { 00:21:00.273 "discovery_filter": "match_any", 00:21:00.273 "admin_cmd_passthru": { 00:21:00.273 "identify_ctrlr": false 00:21:00.273 }, 00:21:00.273 "dhchap_digests": [ 00:21:00.273 "sha256", 00:21:00.273 "sha384", 00:21:00.273 "sha512" 00:21:00.273 ], 00:21:00.273 "dhchap_dhgroups": [ 00:21:00.273 "null", 00:21:00.273 "ffdhe2048", 00:21:00.273 "ffdhe3072", 00:21:00.273 "ffdhe4096", 00:21:00.273 "ffdhe6144", 00:21:00.273 "ffdhe8192" 00:21:00.273 ] 00:21:00.273 } 00:21:00.273 }, 00:21:00.273 { 00:21:00.273 "method": "nvmf_set_max_subsystems", 00:21:00.273 "params": { 00:21:00.273 "max_subsystems": 1024 00:21:00.273 } 00:21:00.273 }, 00:21:00.273 { 00:21:00.273 "method": "nvmf_set_crdt", 00:21:00.273 "params": { 00:21:00.273 "crdt1": 0, 00:21:00.273 "crdt2": 0, 00:21:00.273 "crdt3": 0 00:21:00.273 } 00:21:00.273 }, 00:21:00.273 { 00:21:00.273 "method": "nvmf_create_transport", 00:21:00.273 "params": { 00:21:00.273 "trtype": "TCP", 00:21:00.273 "max_queue_depth": 128, 00:21:00.273 "max_io_qpairs_per_ctrlr": 127, 00:21:00.273 "in_capsule_data_size": 4096, 00:21:00.273 "max_io_size": 131072, 00:21:00.274 "io_unit_size": 131072, 00:21:00.274 "max_aq_depth": 128, 00:21:00.274 "num_shared_buffers": 511, 00:21:00.274 "buf_cache_size": 4294967295, 00:21:00.274 "dif_insert_or_strip": false, 00:21:00.274 "zcopy": false, 00:21:00.274 "c2h_success": false, 00:21:00.274 "sock_priority": 0, 00:21:00.274 "abort_timeout_sec": 1, 00:21:00.274 "ack_timeout": 0, 00:21:00.274 "data_wr_pool_size": 0 00:21:00.274 } 00:21:00.274 }, 00:21:00.274 { 00:21:00.274 "method": "nvmf_create_subsystem", 00:21:00.274 "params": { 00:21:00.274 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.274 "allow_any_host": false, 00:21:00.274 "serial_number": "SPDK00000000000001", 00:21:00.274 "model_number": "SPDK bdev Controller", 00:21:00.274 "max_namespaces": 10, 00:21:00.274 "min_cntlid": 1, 00:21:00.274 
"max_cntlid": 65519, 00:21:00.274 "ana_reporting": false 00:21:00.274 } 00:21:00.274 }, 00:21:00.274 { 00:21:00.274 "method": "nvmf_subsystem_add_host", 00:21:00.274 "params": { 00:21:00.274 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.274 "host": "nqn.2016-06.io.spdk:host1", 00:21:00.274 "psk": "key0" 00:21:00.274 } 00:21:00.274 }, 00:21:00.274 { 00:21:00.274 "method": "nvmf_subsystem_add_ns", 00:21:00.274 "params": { 00:21:00.274 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.274 "namespace": { 00:21:00.274 "nsid": 1, 00:21:00.274 "bdev_name": "malloc0", 00:21:00.274 "nguid": "BCCE33BCB02241939F7A6E7CFFB15E3D", 00:21:00.274 "uuid": "bcce33bc-b022-4193-9f7a-6e7cffb15e3d", 00:21:00.274 "no_auto_visible": false 00:21:00.274 } 00:21:00.274 } 00:21:00.274 }, 00:21:00.274 { 00:21:00.274 "method": "nvmf_subsystem_add_listener", 00:21:00.274 "params": { 00:21:00.274 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.274 "listen_address": { 00:21:00.274 "trtype": "TCP", 00:21:00.274 "adrfam": "IPv4", 00:21:00.274 "traddr": "10.0.0.2", 00:21:00.274 "trsvcid": "4420" 00:21:00.274 }, 00:21:00.274 "secure_channel": true 00:21:00.274 } 00:21:00.274 } 00:21:00.274 ] 00:21:00.274 } 00:21:00.274 ] 00:21:00.274 }' 00:21:00.274 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:00.535 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:21:00.535 "subsystems": [ 00:21:00.535 { 00:21:00.535 "subsystem": "keyring", 00:21:00.535 "config": [ 00:21:00.535 { 00:21:00.535 "method": "keyring_file_add_key", 00:21:00.535 "params": { 00:21:00.535 "name": "key0", 00:21:00.535 "path": "/tmp/tmp.69cw9NEbjJ" 00:21:00.535 } 00:21:00.535 } 00:21:00.535 ] 00:21:00.535 }, 00:21:00.535 { 00:21:00.535 "subsystem": "iobuf", 00:21:00.535 "config": [ 00:21:00.535 { 00:21:00.535 "method": "iobuf_set_options", 00:21:00.535 "params": { 00:21:00.535 "small_pool_count": 8192, 00:21:00.535 "large_pool_count": 1024, 00:21:00.535 "small_bufsize": 8192, 00:21:00.535 "large_bufsize": 135168, 00:21:00.535 "enable_numa": false 00:21:00.535 } 00:21:00.535 } 00:21:00.535 ] 00:21:00.535 }, 00:21:00.535 { 00:21:00.535 "subsystem": "sock", 00:21:00.535 "config": [ 00:21:00.535 { 00:21:00.535 "method": "sock_set_default_impl", 00:21:00.535 "params": { 00:21:00.535 "impl_name": "posix" 00:21:00.535 } 00:21:00.535 }, 00:21:00.535 { 00:21:00.535 "method": "sock_impl_set_options", 00:21:00.535 "params": { 00:21:00.535 "impl_name": "ssl", 00:21:00.535 "recv_buf_size": 4096, 00:21:00.535 "send_buf_size": 4096, 00:21:00.535 "enable_recv_pipe": true, 00:21:00.535 "enable_quickack": false, 00:21:00.535 "enable_placement_id": 0, 00:21:00.535 "enable_zerocopy_send_server": true, 00:21:00.535 "enable_zerocopy_send_client": false, 00:21:00.535 "zerocopy_threshold": 0, 00:21:00.536 "tls_version": 0, 00:21:00.536 "enable_ktls": false 00:21:00.536 } 00:21:00.536 }, 00:21:00.536 { 00:21:00.536 "method": "sock_impl_set_options", 00:21:00.536 "params": { 00:21:00.536 "impl_name": "posix", 00:21:00.536 "recv_buf_size": 2097152, 00:21:00.536 "send_buf_size": 2097152, 00:21:00.536 "enable_recv_pipe": true, 00:21:00.536 "enable_quickack": false, 00:21:00.536 "enable_placement_id": 0, 00:21:00.536 "enable_zerocopy_send_server": true, 00:21:00.536 "enable_zerocopy_send_client": false, 00:21:00.536 "zerocopy_threshold": 0, 00:21:00.536 "tls_version": 0, 00:21:00.536 "enable_ktls": false 00:21:00.536 } 00:21:00.536 
} 00:21:00.536 ] 00:21:00.536 }, 00:21:00.536 { 00:21:00.536 "subsystem": "vmd", 00:21:00.536 "config": [] 00:21:00.536 }, 00:21:00.536 { 00:21:00.536 "subsystem": "accel", 00:21:00.536 "config": [ 00:21:00.536 { 00:21:00.536 "method": "accel_set_options", 00:21:00.536 "params": { 00:21:00.536 "small_cache_size": 128, 00:21:00.536 "large_cache_size": 16, 00:21:00.536 "task_count": 2048, 00:21:00.536 "sequence_count": 2048, 00:21:00.536 "buf_count": 2048 00:21:00.536 } 00:21:00.536 } 00:21:00.536 ] 00:21:00.536 }, 00:21:00.536 { 00:21:00.536 "subsystem": "bdev", 00:21:00.536 "config": [ 00:21:00.536 { 00:21:00.536 "method": "bdev_set_options", 00:21:00.536 "params": { 00:21:00.536 "bdev_io_pool_size": 65535, 00:21:00.536 "bdev_io_cache_size": 256, 00:21:00.536 "bdev_auto_examine": true, 00:21:00.536 "iobuf_small_cache_size": 128, 00:21:00.536 "iobuf_large_cache_size": 16 00:21:00.536 } 00:21:00.536 }, 00:21:00.536 { 00:21:00.536 "method": "bdev_raid_set_options", 00:21:00.536 "params": { 00:21:00.536 "process_window_size_kb": 1024, 00:21:00.536 "process_max_bandwidth_mb_sec": 0 00:21:00.536 } 00:21:00.536 }, 00:21:00.536 { 00:21:00.536 "method": "bdev_iscsi_set_options", 00:21:00.536 "params": { 00:21:00.536 "timeout_sec": 30 00:21:00.536 } 00:21:00.536 }, 00:21:00.536 { 00:21:00.536 "method": "bdev_nvme_set_options", 00:21:00.536 "params": { 00:21:00.536 "action_on_timeout": "none", 00:21:00.536 "timeout_us": 0, 00:21:00.536 "timeout_admin_us": 0, 00:21:00.536 "keep_alive_timeout_ms": 10000, 00:21:00.536 "arbitration_burst": 0, 00:21:00.536 "low_priority_weight": 0, 00:21:00.536 "medium_priority_weight": 0, 00:21:00.536 "high_priority_weight": 0, 00:21:00.536 "nvme_adminq_poll_period_us": 10000, 00:21:00.536 "nvme_ioq_poll_period_us": 0, 00:21:00.536 "io_queue_requests": 512, 00:21:00.536 "delay_cmd_submit": true, 00:21:00.536 "transport_retry_count": 4, 00:21:00.536 "bdev_retry_count": 3, 00:21:00.536 "transport_ack_timeout": 0, 00:21:00.536 "ctrlr_loss_timeout_sec": 0, 00:21:00.536 "reconnect_delay_sec": 0, 00:21:00.536 "fast_io_fail_timeout_sec": 0, 00:21:00.536 "disable_auto_failback": false, 00:21:00.536 "generate_uuids": false, 00:21:00.536 "transport_tos": 0, 00:21:00.536 "nvme_error_stat": false, 00:21:00.536 "rdma_srq_size": 0, 00:21:00.536 "io_path_stat": false, 00:21:00.536 "allow_accel_sequence": false, 00:21:00.536 "rdma_max_cq_size": 0, 00:21:00.536 "rdma_cm_event_timeout_ms": 0, 00:21:00.536 "dhchap_digests": [ 00:21:00.536 "sha256", 00:21:00.536 "sha384", 00:21:00.536 "sha512" 00:21:00.536 ], 00:21:00.536 "dhchap_dhgroups": [ 00:21:00.536 "null", 00:21:00.536 "ffdhe2048", 00:21:00.536 "ffdhe3072", 00:21:00.536 "ffdhe4096", 00:21:00.536 "ffdhe6144", 00:21:00.536 "ffdhe8192" 00:21:00.536 ] 00:21:00.536 } 00:21:00.536 }, 00:21:00.536 { 00:21:00.536 "method": "bdev_nvme_attach_controller", 00:21:00.536 "params": { 00:21:00.536 "name": "TLSTEST", 00:21:00.536 "trtype": "TCP", 00:21:00.536 "adrfam": "IPv4", 00:21:00.536 "traddr": "10.0.0.2", 00:21:00.536 "trsvcid": "4420", 00:21:00.536 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.536 "prchk_reftag": false, 00:21:00.536 "prchk_guard": false, 00:21:00.536 "ctrlr_loss_timeout_sec": 0, 00:21:00.536 "reconnect_delay_sec": 0, 00:21:00.536 "fast_io_fail_timeout_sec": 0, 00:21:00.536 "psk": "key0", 00:21:00.536 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:00.536 "hdgst": false, 00:21:00.536 "ddgst": false, 00:21:00.536 "multipath": "multipath" 00:21:00.536 } 00:21:00.536 }, 00:21:00.536 { 00:21:00.536 "method": 
"bdev_nvme_set_hotplug", 00:21:00.536 "params": { 00:21:00.536 "period_us": 100000, 00:21:00.536 "enable": false 00:21:00.536 } 00:21:00.536 }, 00:21:00.536 { 00:21:00.536 "method": "bdev_wait_for_examine" 00:21:00.536 } 00:21:00.536 ] 00:21:00.536 }, 00:21:00.536 { 00:21:00.536 "subsystem": "nbd", 00:21:00.536 "config": [] 00:21:00.536 } 00:21:00.536 ] 00:21:00.537 }' 00:21:00.537 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 623709 00:21:00.537 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 623709 ']' 00:21:00.537 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 623709 00:21:00.537 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:00.537 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:00.537 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 623709 00:21:00.537 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:00.537 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:00.537 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 623709' 00:21:00.537 killing process with pid 623709 00:21:00.537 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 623709 00:21:00.537 Received shutdown signal, test time was about 10.000000 seconds 00:21:00.537 00:21:00.537 Latency(us) 00:21:00.537 [2024-11-20T14:30:49.497Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.537 [2024-11-20T14:30:49.497Z] =================================================================================================================== 00:21:00.537 [2024-11-20T14:30:49.497Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:00.537 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 623709 00:21:00.537 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 623154 00:21:00.537 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 623154 ']' 00:21:00.537 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 623154 00:21:00.537 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:00.537 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:00.537 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 623154 00:21:00.799 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:00.799 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:00.799 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 623154' 00:21:00.799 killing process with pid 623154 00:21:00.799 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 623154 00:21:00.799 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 623154 00:21:00.799 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:00.799 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:00.799 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:00.799 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.799 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:21:00.799 "subsystems": [ 00:21:00.799 { 00:21:00.799 "subsystem": "keyring", 00:21:00.799 "config": [ 00:21:00.799 { 00:21:00.799 "method": "keyring_file_add_key", 00:21:00.799 "params": { 00:21:00.799 "name": "key0", 00:21:00.799 "path": "/tmp/tmp.69cw9NEbjJ" 00:21:00.799 } 00:21:00.799 } 00:21:00.799 ] 00:21:00.799 }, 00:21:00.799 { 00:21:00.799 "subsystem": "iobuf", 00:21:00.799 "config": [ 00:21:00.799 { 00:21:00.799 "method": "iobuf_set_options", 00:21:00.799 "params": { 00:21:00.799 "small_pool_count": 8192, 00:21:00.799 "large_pool_count": 1024, 00:21:00.799 "small_bufsize": 8192, 00:21:00.799 "large_bufsize": 135168, 00:21:00.799 "enable_numa": false 00:21:00.799 } 00:21:00.799 } 00:21:00.799 ] 00:21:00.799 }, 00:21:00.799 { 00:21:00.799 "subsystem": "sock", 00:21:00.799 "config": [ 00:21:00.799 { 00:21:00.799 "method": "sock_set_default_impl", 00:21:00.799 "params": { 00:21:00.799 "impl_name": "posix" 00:21:00.799 } 00:21:00.799 }, 00:21:00.799 { 00:21:00.799 "method": "sock_impl_set_options", 00:21:00.799 "params": { 00:21:00.799 "impl_name": "ssl", 00:21:00.799 "recv_buf_size": 4096, 00:21:00.799 "send_buf_size": 4096, 00:21:00.799 "enable_recv_pipe": true, 00:21:00.799 "enable_quickack": false, 00:21:00.799 "enable_placement_id": 0, 00:21:00.799 "enable_zerocopy_send_server": true, 00:21:00.799 "enable_zerocopy_send_client": false, 00:21:00.799 "zerocopy_threshold": 0, 00:21:00.799 "tls_version": 0, 00:21:00.799 "enable_ktls": false 00:21:00.799 } 00:21:00.799 }, 00:21:00.799 { 00:21:00.799 "method": "sock_impl_set_options", 00:21:00.799 "params": { 00:21:00.799 "impl_name": "posix", 00:21:00.799 "recv_buf_size": 2097152, 00:21:00.799 "send_buf_size": 2097152, 00:21:00.799 "enable_recv_pipe": true, 00:21:00.799 "enable_quickack": false, 00:21:00.799 "enable_placement_id": 0, 00:21:00.799 "enable_zerocopy_send_server": true, 00:21:00.799 "enable_zerocopy_send_client": false, 00:21:00.799 "zerocopy_threshold": 0, 00:21:00.799 "tls_version": 0, 00:21:00.799 "enable_ktls": false 00:21:00.799 } 00:21:00.799 } 00:21:00.799 ] 00:21:00.799 }, 00:21:00.799 { 00:21:00.799 "subsystem": "vmd", 00:21:00.799 "config": [] 00:21:00.799 }, 00:21:00.799 { 00:21:00.799 "subsystem": "accel", 00:21:00.799 "config": [ 00:21:00.799 { 00:21:00.799 "method": "accel_set_options", 00:21:00.799 "params": { 00:21:00.799 "small_cache_size": 128, 00:21:00.799 "large_cache_size": 16, 00:21:00.799 "task_count": 2048, 00:21:00.799 "sequence_count": 2048, 00:21:00.799 "buf_count": 2048 00:21:00.799 } 00:21:00.799 } 00:21:00.799 ] 00:21:00.799 }, 00:21:00.799 { 00:21:00.799 "subsystem": "bdev", 00:21:00.799 "config": [ 00:21:00.799 { 00:21:00.799 "method": "bdev_set_options", 00:21:00.799 "params": { 00:21:00.799 "bdev_io_pool_size": 65535, 00:21:00.799 "bdev_io_cache_size": 256, 00:21:00.799 "bdev_auto_examine": true, 00:21:00.799 "iobuf_small_cache_size": 128, 00:21:00.799 "iobuf_large_cache_size": 16 00:21:00.799 } 00:21:00.799 }, 00:21:00.799 { 00:21:00.799 "method": "bdev_raid_set_options", 00:21:00.799 "params": { 00:21:00.799 
"process_window_size_kb": 1024, 00:21:00.799 "process_max_bandwidth_mb_sec": 0 00:21:00.799 } 00:21:00.799 }, 00:21:00.799 { 00:21:00.799 "method": "bdev_iscsi_set_options", 00:21:00.799 "params": { 00:21:00.799 "timeout_sec": 30 00:21:00.799 } 00:21:00.799 }, 00:21:00.799 { 00:21:00.799 "method": "bdev_nvme_set_options", 00:21:00.799 "params": { 00:21:00.799 "action_on_timeout": "none", 00:21:00.799 "timeout_us": 0, 00:21:00.799 "timeout_admin_us": 0, 00:21:00.799 "keep_alive_timeout_ms": 10000, 00:21:00.799 "arbitration_burst": 0, 00:21:00.799 "low_priority_weight": 0, 00:21:00.799 "medium_priority_weight": 0, 00:21:00.799 "high_priority_weight": 0, 00:21:00.799 "nvme_adminq_poll_period_us": 10000, 00:21:00.799 "nvme_ioq_poll_period_us": 0, 00:21:00.799 "io_queue_requests": 0, 00:21:00.799 "delay_cmd_submit": true, 00:21:00.799 "transport_retry_count": 4, 00:21:00.799 "bdev_retry_count": 3, 00:21:00.799 "transport_ack_timeout": 0, 00:21:00.799 "ctrlr_loss_timeout_sec": 0, 00:21:00.799 "reconnect_delay_sec": 0, 00:21:00.799 "fast_io_fail_timeout_sec": 0, 00:21:00.799 "disable_auto_failback": false, 00:21:00.799 "generate_uuids": false, 00:21:00.799 "transport_tos": 0, 00:21:00.799 "nvme_error_stat": false, 00:21:00.799 "rdma_srq_size": 0, 00:21:00.799 "io_path_stat": false, 00:21:00.799 "allow_accel_sequence": false, 00:21:00.800 "rdma_max_cq_size": 0, 00:21:00.800 "rdma_cm_event_timeout_ms": 0, 00:21:00.800 "dhchap_digests": [ 00:21:00.800 "sha256", 00:21:00.800 "sha384", 00:21:00.800 "sha512" 00:21:00.800 ], 00:21:00.800 "dhchap_dhgroups": [ 00:21:00.800 "null", 00:21:00.800 "ffdhe2048", 00:21:00.800 "ffdhe3072", 00:21:00.800 "ffdhe4096", 00:21:00.800 "ffdhe6144", 00:21:00.800 "ffdhe8192" 00:21:00.800 ] 00:21:00.800 } 00:21:00.800 }, 00:21:00.800 { 00:21:00.800 "method": "bdev_nvme_set_hotplug", 00:21:00.800 "params": { 00:21:00.800 "period_us": 100000, 00:21:00.800 "enable": false 00:21:00.800 } 00:21:00.800 }, 00:21:00.800 { 00:21:00.800 "method": "bdev_malloc_create", 00:21:00.800 "params": { 00:21:00.800 "name": "malloc0", 00:21:00.800 "num_blocks": 8192, 00:21:00.800 "block_size": 4096, 00:21:00.800 "physical_block_size": 4096, 00:21:00.800 "uuid": "bcce33bc-b022-4193-9f7a-6e7cffb15e3d", 00:21:00.800 "optimal_io_boundary": 0, 00:21:00.800 "md_size": 0, 00:21:00.800 "dif_type": 0, 00:21:00.800 "dif_is_head_of_md": false, 00:21:00.800 "dif_pi_format": 0 00:21:00.800 } 00:21:00.800 }, 00:21:00.800 { 00:21:00.800 "method": "bdev_wait_for_examine" 00:21:00.800 } 00:21:00.800 ] 00:21:00.800 }, 00:21:00.800 { 00:21:00.800 "subsystem": "nbd", 00:21:00.800 "config": [] 00:21:00.800 }, 00:21:00.800 { 00:21:00.800 "subsystem": "scheduler", 00:21:00.800 "config": [ 00:21:00.800 { 00:21:00.800 "method": "framework_set_scheduler", 00:21:00.800 "params": { 00:21:00.800 "name": "static" 00:21:00.800 } 00:21:00.800 } 00:21:00.800 ] 00:21:00.800 }, 00:21:00.800 { 00:21:00.800 "subsystem": "nvmf", 00:21:00.800 "config": [ 00:21:00.800 { 00:21:00.800 "method": "nvmf_set_config", 00:21:00.800 "params": { 00:21:00.800 "discovery_filter": "match_any", 00:21:00.800 "admin_cmd_passthru": { 00:21:00.800 "identify_ctrlr": false 00:21:00.800 }, 00:21:00.800 "dhchap_digests": [ 00:21:00.800 "sha256", 00:21:00.800 "sha384", 00:21:00.800 "sha512" 00:21:00.800 ], 00:21:00.800 "dhchap_dhgroups": [ 00:21:00.800 "null", 00:21:00.800 "ffdhe2048", 00:21:00.800 "ffdhe3072", 00:21:00.800 "ffdhe4096", 00:21:00.800 "ffdhe6144", 00:21:00.800 "ffdhe8192" 00:21:00.800 ] 00:21:00.800 } 00:21:00.800 }, 00:21:00.800 { 
00:21:00.800 "method": "nvmf_set_max_subsystems", 00:21:00.800 "params": { 00:21:00.800 "max_subsystems": 1024 00:21:00.800 } 00:21:00.800 }, 00:21:00.800 { 00:21:00.800 "method": "nvmf_set_crdt", 00:21:00.800 "params": { 00:21:00.800 "crdt1": 0, 00:21:00.800 "crdt2": 0, 00:21:00.800 "crdt3": 0 00:21:00.800 } 00:21:00.800 }, 00:21:00.800 { 00:21:00.800 "method": "nvmf_create_transport", 00:21:00.800 "params": { 00:21:00.800 "trtype": "TCP", 00:21:00.800 "max_queue_depth": 128, 00:21:00.800 "max_io_qpairs_per_ctrlr": 127, 00:21:00.800 "in_capsule_data_size": 4096, 00:21:00.800 "max_io_size": 131072, 00:21:00.800 "io_unit_size": 131072, 00:21:00.800 "max_aq_depth": 128, 00:21:00.800 "num_shared_buffers": 511, 00:21:00.800 "buf_cache_size": 4294967295, 00:21:00.800 "dif_insert_or_strip": false, 00:21:00.800 "zcopy": false, 00:21:00.800 "c2h_success": false, 00:21:00.800 "sock_priority": 0, 00:21:00.800 "abort_timeout_sec": 1, 00:21:00.800 "ack_timeout": 0, 00:21:00.800 "data_wr_pool_size": 0 00:21:00.800 } 00:21:00.800 }, 00:21:00.800 { 00:21:00.800 "method": "nvmf_create_subsystem", 00:21:00.800 "params": { 00:21:00.800 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.800 "allow_any_host": false, 00:21:00.800 "serial_number": "SPDK00000000000001", 00:21:00.800 "model_number": "SPDK bdev Controller", 00:21:00.800 "max_namespaces": 10, 00:21:00.800 "min_cntlid": 1, 00:21:00.800 "max_cntlid": 65519, 00:21:00.800 "ana_reporting": false 00:21:00.800 } 00:21:00.800 }, 00:21:00.800 { 00:21:00.800 "method": "nvmf_subsystem_add_host", 00:21:00.800 "params": { 00:21:00.800 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.800 "host": "nqn.2016-06.io.spdk:host1", 00:21:00.800 "psk": "key0" 00:21:00.800 } 00:21:00.800 }, 00:21:00.800 { 00:21:00.800 "method": "nvmf_subsystem_add_ns", 00:21:00.800 "params": { 00:21:00.800 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.800 "namespace": { 00:21:00.800 "nsid": 1, 00:21:00.800 "bdev_name": "malloc0", 00:21:00.800 "nguid": "BCCE33BCB02241939F7A6E7CFFB15E3D", 00:21:00.800 "uuid": "bcce33bc-b022-4193-9f7a-6e7cffb15e3d", 00:21:00.800 "no_auto_visible": false 00:21:00.800 } 00:21:00.800 } 00:21:00.800 }, 00:21:00.800 { 00:21:00.800 "method": "nvmf_subsystem_add_listener", 00:21:00.800 "params": { 00:21:00.800 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.800 "listen_address": { 00:21:00.800 "trtype": "TCP", 00:21:00.800 "adrfam": "IPv4", 00:21:00.800 "traddr": "10.0.0.2", 00:21:00.800 "trsvcid": "4420" 00:21:00.800 }, 00:21:00.800 "secure_channel": true 00:21:00.800 } 00:21:00.800 } 00:21:00.800 ] 00:21:00.800 } 00:21:00.800 ] 00:21:00.800 }' 00:21:00.800 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=624088 00:21:00.800 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 624088 00:21:00.800 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:00.800 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 624088 ']' 00:21:00.800 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.800 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:00.800 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:21:00.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.800 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:00.800 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.800 [2024-11-20 15:30:49.704654] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:21:00.800 [2024-11-20 15:30:49.704710] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:01.061 [2024-11-20 15:30:49.793932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.061 [2024-11-20 15:30:49.822964] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:01.061 [2024-11-20 15:30:49.822991] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:01.061 [2024-11-20 15:30:49.822996] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:01.061 [2024-11-20 15:30:49.823001] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:01.061 [2024-11-20 15:30:49.823005] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:01.061 [2024-11-20 15:30:49.823497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:01.061 [2024-11-20 15:30:50.016197] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:01.320 [2024-11-20 15:30:50.048227] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:01.320 [2024-11-20 15:30:50.048423] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:01.580 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:01.580 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:01.580 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:01.580 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:01.580 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.580 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:01.580 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=624198 00:21:01.580 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 624198 /var/tmp/bdevperf.sock 00:21:01.580 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 624198 ']' 00:21:01.580 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:01.580 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:01.580 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:01.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:01.580 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:01.580 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:01.580 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.580 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:21:01.580 "subsystems": [ 00:21:01.580 { 00:21:01.580 "subsystem": "keyring", 00:21:01.580 "config": [ 00:21:01.580 { 00:21:01.580 "method": "keyring_file_add_key", 00:21:01.580 "params": { 00:21:01.580 "name": "key0", 00:21:01.580 "path": "/tmp/tmp.69cw9NEbjJ" 00:21:01.580 } 00:21:01.580 } 00:21:01.580 ] 00:21:01.580 }, 00:21:01.580 { 00:21:01.580 "subsystem": "iobuf", 00:21:01.580 "config": [ 00:21:01.580 { 00:21:01.580 "method": "iobuf_set_options", 00:21:01.580 "params": { 00:21:01.580 "small_pool_count": 8192, 00:21:01.580 "large_pool_count": 1024, 00:21:01.580 "small_bufsize": 8192, 00:21:01.580 "large_bufsize": 135168, 00:21:01.580 "enable_numa": false 00:21:01.580 } 00:21:01.580 } 00:21:01.580 ] 00:21:01.580 }, 00:21:01.580 { 00:21:01.580 "subsystem": "sock", 00:21:01.580 "config": [ 00:21:01.580 { 00:21:01.580 "method": "sock_set_default_impl", 00:21:01.580 "params": { 00:21:01.580 "impl_name": "posix" 00:21:01.580 } 00:21:01.580 }, 00:21:01.580 { 00:21:01.580 "method": "sock_impl_set_options", 00:21:01.580 "params": { 00:21:01.580 "impl_name": "ssl", 00:21:01.580 "recv_buf_size": 4096, 00:21:01.580 "send_buf_size": 4096, 00:21:01.580 "enable_recv_pipe": true, 00:21:01.580 "enable_quickack": false, 00:21:01.580 "enable_placement_id": 0, 00:21:01.580 "enable_zerocopy_send_server": true, 00:21:01.580 "enable_zerocopy_send_client": false, 00:21:01.580 "zerocopy_threshold": 0, 00:21:01.581 "tls_version": 0, 00:21:01.581 "enable_ktls": false 00:21:01.581 } 00:21:01.581 }, 00:21:01.581 { 00:21:01.581 "method": "sock_impl_set_options", 00:21:01.581 "params": { 00:21:01.581 "impl_name": "posix", 00:21:01.581 "recv_buf_size": 2097152, 00:21:01.581 "send_buf_size": 2097152, 00:21:01.581 "enable_recv_pipe": true, 00:21:01.581 "enable_quickack": false, 00:21:01.581 "enable_placement_id": 0, 00:21:01.581 "enable_zerocopy_send_server": true, 00:21:01.581 "enable_zerocopy_send_client": false, 00:21:01.581 "zerocopy_threshold": 0, 00:21:01.581 "tls_version": 0, 00:21:01.581 "enable_ktls": false 00:21:01.581 } 00:21:01.581 } 00:21:01.581 ] 00:21:01.581 }, 00:21:01.581 { 00:21:01.581 "subsystem": "vmd", 00:21:01.581 "config": [] 00:21:01.581 }, 00:21:01.581 { 00:21:01.581 "subsystem": "accel", 00:21:01.581 "config": [ 00:21:01.581 { 00:21:01.581 "method": "accel_set_options", 00:21:01.581 "params": { 00:21:01.581 "small_cache_size": 128, 00:21:01.581 "large_cache_size": 16, 00:21:01.581 "task_count": 2048, 00:21:01.581 "sequence_count": 2048, 00:21:01.581 "buf_count": 2048 00:21:01.581 } 00:21:01.581 } 00:21:01.581 ] 00:21:01.581 }, 00:21:01.581 { 00:21:01.581 "subsystem": "bdev", 00:21:01.581 "config": [ 00:21:01.581 { 00:21:01.581 "method": "bdev_set_options", 00:21:01.581 "params": { 00:21:01.581 "bdev_io_pool_size": 65535, 00:21:01.581 "bdev_io_cache_size": 256, 00:21:01.581 "bdev_auto_examine": true, 00:21:01.581 "iobuf_small_cache_size": 128, 
00:21:01.581 "iobuf_large_cache_size": 16 00:21:01.581 } 00:21:01.581 }, 00:21:01.581 { 00:21:01.581 "method": "bdev_raid_set_options", 00:21:01.581 "params": { 00:21:01.581 "process_window_size_kb": 1024, 00:21:01.581 "process_max_bandwidth_mb_sec": 0 00:21:01.581 } 00:21:01.581 }, 00:21:01.581 { 00:21:01.581 "method": "bdev_iscsi_set_options", 00:21:01.581 "params": { 00:21:01.581 "timeout_sec": 30 00:21:01.581 } 00:21:01.581 }, 00:21:01.581 { 00:21:01.581 "method": "bdev_nvme_set_options", 00:21:01.581 "params": { 00:21:01.581 "action_on_timeout": "none", 00:21:01.581 "timeout_us": 0, 00:21:01.581 "timeout_admin_us": 0, 00:21:01.581 "keep_alive_timeout_ms": 10000, 00:21:01.581 "arbitration_burst": 0, 00:21:01.581 "low_priority_weight": 0, 00:21:01.581 "medium_priority_weight": 0, 00:21:01.581 "high_priority_weight": 0, 00:21:01.581 "nvme_adminq_poll_period_us": 10000, 00:21:01.581 "nvme_ioq_poll_period_us": 0, 00:21:01.581 "io_queue_requests": 512, 00:21:01.581 "delay_cmd_submit": true, 00:21:01.581 "transport_retry_count": 4, 00:21:01.581 "bdev_retry_count": 3, 00:21:01.581 "transport_ack_timeout": 0, 00:21:01.581 "ctrlr_loss_timeout_sec": 0, 00:21:01.581 "reconnect_delay_sec": 0, 00:21:01.581 "fast_io_fail_timeout_sec": 0, 00:21:01.581 "disable_auto_failback": false, 00:21:01.581 "generate_uuids": false, 00:21:01.581 "transport_tos": 0, 00:21:01.581 "nvme_error_stat": false, 00:21:01.581 "rdma_srq_size": 0, 00:21:01.581 "io_path_stat": false, 00:21:01.581 "allow_accel_sequence": false, 00:21:01.581 "rdma_max_cq_size": 0, 00:21:01.581 "rdma_cm_event_timeout_ms": 0, 00:21:01.581 "dhchap_digests": [ 00:21:01.581 "sha256", 00:21:01.581 "sha384", 00:21:01.581 "sha512" 00:21:01.581 ], 00:21:01.581 "dhchap_dhgroups": [ 00:21:01.581 "null", 00:21:01.581 "ffdhe2048", 00:21:01.581 "ffdhe3072", 00:21:01.581 "ffdhe4096", 00:21:01.581 "ffdhe6144", 00:21:01.581 "ffdhe8192" 00:21:01.581 ] 00:21:01.581 } 00:21:01.581 }, 00:21:01.581 { 00:21:01.581 "method": "bdev_nvme_attach_controller", 00:21:01.581 "params": { 00:21:01.581 "name": "TLSTEST", 00:21:01.581 "trtype": "TCP", 00:21:01.581 "adrfam": "IPv4", 00:21:01.581 "traddr": "10.0.0.2", 00:21:01.581 "trsvcid": "4420", 00:21:01.581 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.581 "prchk_reftag": false, 00:21:01.581 "prchk_guard": false, 00:21:01.581 "ctrlr_loss_timeout_sec": 0, 00:21:01.581 "reconnect_delay_sec": 0, 00:21:01.581 "fast_io_fail_timeout_sec": 0, 00:21:01.581 "psk": "key0", 00:21:01.581 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:01.581 "hdgst": false, 00:21:01.581 "ddgst": false, 00:21:01.581 "multipath": "multipath" 00:21:01.581 } 00:21:01.581 }, 00:21:01.581 { 00:21:01.581 "method": "bdev_nvme_set_hotplug", 00:21:01.581 "params": { 00:21:01.581 "period_us": 100000, 00:21:01.581 "enable": false 00:21:01.581 } 00:21:01.581 }, 00:21:01.581 { 00:21:01.581 "method": "bdev_wait_for_examine" 00:21:01.581 } 00:21:01.581 ] 00:21:01.581 }, 00:21:01.581 { 00:21:01.581 "subsystem": "nbd", 00:21:01.581 "config": [] 00:21:01.581 } 00:21:01.581 ] 00:21:01.581 }' 00:21:01.842 [2024-11-20 15:30:50.581888] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
00:21:01.842 [2024-11-20 15:30:50.581941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid624198 ] 00:21:01.842 [2024-11-20 15:30:50.666209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.842 [2024-11-20 15:30:50.695425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.102 [2024-11-20 15:30:50.829361] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:02.688 15:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:02.688 15:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:02.688 15:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:02.688 Running I/O for 10 seconds... 00:21:04.632 5087.00 IOPS, 19.87 MiB/s [2024-11-20T14:30:54.532Z] 5148.50 IOPS, 20.11 MiB/s [2024-11-20T14:30:55.472Z] 4982.33 IOPS, 19.46 MiB/s [2024-11-20T14:30:56.856Z] 5098.50 IOPS, 19.92 MiB/s [2024-11-20T14:30:57.798Z] 5303.80 IOPS, 20.72 MiB/s [2024-11-20T14:30:58.740Z] 5315.33 IOPS, 20.76 MiB/s [2024-11-20T14:30:59.680Z] 5301.71 IOPS, 20.71 MiB/s [2024-11-20T14:31:00.618Z] 5381.88 IOPS, 21.02 MiB/s [2024-11-20T14:31:01.559Z] 5492.78 IOPS, 21.46 MiB/s [2024-11-20T14:31:01.559Z] 5450.20 IOPS, 21.29 MiB/s 00:21:12.599 Latency(us) 00:21:12.599 [2024-11-20T14:31:01.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.599 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:12.599 Verification LBA range: start 0x0 length 0x2000 00:21:12.599 TLSTESTn1 : 10.01 5456.15 21.31 0.00 0.00 23427.56 4642.13 24029.87 00:21:12.599 [2024-11-20T14:31:01.559Z] =================================================================================================================== 00:21:12.599 [2024-11-20T14:31:01.559Z] Total : 5456.15 21.31 0.00 0.00 23427.56 4642.13 24029.87 00:21:12.599 { 00:21:12.599 "results": [ 00:21:12.599 { 00:21:12.599 "job": "TLSTESTn1", 00:21:12.599 "core_mask": "0x4", 00:21:12.599 "workload": "verify", 00:21:12.599 "status": "finished", 00:21:12.599 "verify_range": { 00:21:12.599 "start": 0, 00:21:12.599 "length": 8192 00:21:12.599 }, 00:21:12.599 "queue_depth": 128, 00:21:12.599 "io_size": 4096, 00:21:12.599 "runtime": 10.012563, 00:21:12.599 "iops": 5456.145444478102, 00:21:12.599 "mibps": 21.313068142492586, 00:21:12.599 "io_failed": 0, 00:21:12.599 "io_timeout": 0, 00:21:12.599 "avg_latency_us": 23427.557226920493, 00:21:12.599 "min_latency_us": 4642.133333333333, 00:21:12.599 "max_latency_us": 24029.866666666665 00:21:12.599 } 00:21:12.599 ], 00:21:12.599 "core_count": 1 00:21:12.599 } 00:21:12.599 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:12.599 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 624198 00:21:12.599 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 624198 ']' 00:21:12.599 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 624198 00:21:12.599 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:21:12.599 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:12.599 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 624198 00:21:12.859 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:12.859 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:12.859 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 624198' 00:21:12.859 killing process with pid 624198 00:21:12.859 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 624198 00:21:12.859 Received shutdown signal, test time was about 10.000000 seconds 00:21:12.859 00:21:12.859 Latency(us) 00:21:12.859 [2024-11-20T14:31:01.819Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.859 [2024-11-20T14:31:01.819Z] =================================================================================================================== 00:21:12.859 [2024-11-20T14:31:01.819Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:12.859 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 624198 00:21:12.859 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 624088 00:21:12.859 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 624088 ']' 00:21:12.859 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 624088 00:21:12.859 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:12.859 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:12.860 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 624088 00:21:12.860 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:12.860 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:12.860 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 624088' 00:21:12.860 killing process with pid 624088 00:21:12.860 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 624088 00:21:12.860 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 624088 00:21:13.120 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:21:13.120 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:13.120 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:13.120 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.120 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=626456 00:21:13.120 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 626456 00:21:13.120 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:13.120 15:31:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 626456 ']' 00:21:13.120 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.120 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:13.120 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:13.120 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:13.120 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.120 [2024-11-20 15:31:01.910065] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:21:13.120 [2024-11-20 15:31:01.910116] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.120 [2024-11-20 15:31:02.003563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.120 [2024-11-20 15:31:02.039596] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:13.120 [2024-11-20 15:31:02.039639] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:13.120 [2024-11-20 15:31:02.039647] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:13.120 [2024-11-20 15:31:02.039654] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:13.120 [2024-11-20 15:31:02.039661] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
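Each stage repeats the same start-up choreography traced here: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace, records nvmfpid, and waitforlisten blocks until the new process answers on /var/tmp/spdk.sock. The trace exposes the helper's shape — local rpc_addr (@839), max_retries=100 (@840), the banner echo (@842), and the (( i == 0 )) countdown test (@864) — but not its probe, so the rpc_get_methods call and retry cadence in this sketch are assumptions:

    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}      # @839
        local max_retries=100                        # @840
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        local i
        for ((i = max_retries; i > 0; i--)); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died during startup
            # Assumption: any cheap RPC serves as the readiness probe.
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && break
            sleep 0.5
        done
        (( i == 0 )) && return 1                     # @864: retries exhausted
        return 0                                     # @868
    }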
00:21:13.120 [2024-11-20 15:31:02.040326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.062 15:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:14.062 15:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:14.062 15:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:14.062 15:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:14.062 15:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.062 15:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:14.062 15:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.69cw9NEbjJ 00:21:14.062 15:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.69cw9NEbjJ 00:21:14.063 15:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:14.063 [2024-11-20 15:31:02.920924] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:14.063 15:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:14.323 15:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:14.583 [2024-11-20 15:31:03.289850] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:14.583 [2024-11-20 15:31:03.290103] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:14.583 15:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:14.583 malloc0 00:21:14.583 15:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:14.843 15:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.69cw9NEbjJ 00:21:15.104 15:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:15.104 15:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:15.104 15:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=626824 00:21:15.104 15:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:15.104 15:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 626824 /var/tmp/bdevperf.sock 00:21:15.104 15:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 626824 ']' 00:21:15.104 15:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:15.104 15:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:15.104 15:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:15.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:15.104 15:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:15.104 15:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.364 [2024-11-20 15:31:04.081976] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:21:15.364 [2024-11-20 15:31:04.082029] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid626824 ] 00:21:15.364 [2024-11-20 15:31:04.163418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.364 [2024-11-20 15:31:04.197120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.364 15:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:15.364 15:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:15.364 15:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.69cw9NEbjJ 00:21:15.624 15:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:15.885 [2024-11-20 15:31:04.617600] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:15.885 nvme0n1 00:21:15.885 15:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:15.885 Running I/O for 1 seconds... 
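The initiator half of the TLS setup is just the two calls traced above against bdevperf's private RPC socket: load the same PSK into this process's keyring, then attach with --psk so bdev_nvme negotiates the NVMe/TCP connection over TLS. Standalone, with every flag as traced:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Same key file the target side registered; key0 here lives in
    # bdevperf's own keyring, separate from the target's.
    $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.69cw9NEbjJ

    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
         -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
         -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

Success surfaces as the new bdev (nvme0n1 above) that the one-second verify run below exercises.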
00:21:17.269 4607.00 IOPS, 18.00 MiB/s 00:21:17.270 Latency(us) 00:21:17.270 [2024-11-20T14:31:06.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.270 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:17.270 Verification LBA range: start 0x0 length 0x2000 00:21:17.270 nvme0n1 : 1.08 4394.27 17.17 0.00 0.00 28453.03 5679.79 76895.57 00:21:17.270 [2024-11-20T14:31:06.230Z] =================================================================================================================== 00:21:17.270 [2024-11-20T14:31:06.230Z] Total : 4394.27 17.17 0.00 0.00 28453.03 5679.79 76895.57 00:21:17.270 { 00:21:17.270 "results": [ 00:21:17.270 { 00:21:17.270 "job": "nvme0n1", 00:21:17.270 "core_mask": "0x2", 00:21:17.270 "workload": "verify", 00:21:17.270 "status": "finished", 00:21:17.270 "verify_range": { 00:21:17.270 "start": 0, 00:21:17.270 "length": 8192 00:21:17.270 }, 00:21:17.270 "queue_depth": 128, 00:21:17.270 "io_size": 4096, 00:21:17.270 "runtime": 1.07754, 00:21:17.270 "iops": 4394.268426230116, 00:21:17.270 "mibps": 17.165111039961392, 00:21:17.270 "io_failed": 0, 00:21:17.270 "io_timeout": 0, 00:21:17.270 "avg_latency_us": 28453.030521647306, 00:21:17.270 "min_latency_us": 5679.786666666667, 00:21:17.270 "max_latency_us": 76895.57333333333 00:21:17.270 } 00:21:17.270 ], 00:21:17.270 "core_count": 1 00:21:17.270 } 00:21:17.270 15:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 626824 00:21:17.270 15:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 626824 ']' 00:21:17.270 15:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 626824 00:21:17.270 15:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:17.270 15:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:17.270 15:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 626824 00:21:17.270 15:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:17.270 15:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:17.270 15:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 626824' 00:21:17.270 killing process with pid 626824 00:21:17.270 15:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 626824 00:21:17.270 Received shutdown signal, test time was about 1.000000 seconds 00:21:17.270 00:21:17.270 Latency(us) 00:21:17.270 [2024-11-20T14:31:06.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.270 [2024-11-20T14:31:06.230Z] =================================================================================================================== 00:21:17.270 [2024-11-20T14:31:06.230Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:17.270 15:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 626824 00:21:17.270 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 626456 00:21:17.270 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 626456 ']' 00:21:17.270 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 626456 00:21:17.270 15:31:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:17.270 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:17.270 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 626456 00:21:17.270 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:17.270 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:17.270 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 626456' 00:21:17.270 killing process with pid 626456 00:21:17.270 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 626456 00:21:17.270 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 626456 00:21:17.531 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:21:17.531 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:17.531 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:17.531 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.531 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=627313 00:21:17.531 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 627313 00:21:17.531 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:17.531 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 627313 ']' 00:21:17.531 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.531 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:17.531 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.531 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:17.531 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.531 [2024-11-20 15:31:06.327695] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:21:17.531 [2024-11-20 15:31:06.327759] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:17.531 [2024-11-20 15:31:06.424898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.531 [2024-11-20 15:31:06.472849] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:17.531 [2024-11-20 15:31:06.472894] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:17.531 [2024-11-20 15:31:06.472902] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:17.532 [2024-11-20 15:31:06.472910] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:17.532 [2024-11-20 15:31:06.472916] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:17.532 [2024-11-20 15:31:06.473665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.475 15:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:18.475 15:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:18.475 15:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:18.475 15:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:18.475 15:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.475 15:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:18.475 15:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:21:18.475 15:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.475 15:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.475 [2024-11-20 15:31:07.185076] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:18.475 malloc0 00:21:18.475 [2024-11-20 15:31:07.215229] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:18.475 [2024-11-20 15:31:07.215579] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.475 15:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.475 15:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=627525 00:21:18.475 15:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 627525 /var/tmp/bdevperf.sock 00:21:18.475 15:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:18.475 15:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 627525 ']' 00:21:18.475 15:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:18.475 15:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:18.475 15:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:18.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:18.475 15:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:18.475 15:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.475 [2024-11-20 15:31:07.297449] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
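The bdevperf launch above follows the suite's passive-server pattern: -z starts the app idle with no job file, -r moves its JSON-RPC server onto a private UNIX socket so it cannot collide with the target's /var/tmp/spdk.sock, and -m 2 (core mask 0x2) pins it to core 1 while the target reactor runs on core 0. A minimal sketch of that pattern, using waitforlisten, the suite's own polling helper from autotest_common.sh:

./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
bdevperf_pid=$!
# Block until the RPC socket accepts connections before issuing any rpc.py calls.
waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock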
00:21:18.475 [2024-11-20 15:31:07.297513] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid627525 ] 00:21:18.475 [2024-11-20 15:31:07.385423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.475 [2024-11-20 15:31:07.419521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.416 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:19.416 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:19.416 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.69cw9NEbjJ 00:21:19.416 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:19.676 [2024-11-20 15:31:08.385343] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:19.676 nvme0n1 00:21:19.676 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:19.676 Running I/O for 1 seconds... 00:21:20.878 4626.00 IOPS, 18.07 MiB/s 00:21:20.878 Latency(us) 00:21:20.878 [2024-11-20T14:31:09.838Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.878 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:20.878 Verification LBA range: start 0x0 length 0x2000 00:21:20.878 nvme0n1 : 1.02 4682.39 18.29 0.00 0.00 27183.00 4614.83 32549.55 00:21:20.878 [2024-11-20T14:31:09.838Z] =================================================================================================================== 00:21:20.878 [2024-11-20T14:31:09.838Z] Total : 4682.39 18.29 0.00 0.00 27183.00 4614.83 32549.55 00:21:20.878 { 00:21:20.878 "results": [ 00:21:20.878 { 00:21:20.878 "job": "nvme0n1", 00:21:20.878 "core_mask": "0x2", 00:21:20.878 "workload": "verify", 00:21:20.878 "status": "finished", 00:21:20.878 "verify_range": { 00:21:20.878 "start": 0, 00:21:20.878 "length": 8192 00:21:20.878 }, 00:21:20.878 "queue_depth": 128, 00:21:20.878 "io_size": 4096, 00:21:20.878 "runtime": 1.015293, 00:21:20.878 "iops": 4682.392176445617, 00:21:20.878 "mibps": 18.290594439240692, 00:21:20.878 "io_failed": 0, 00:21:20.878 "io_timeout": 0, 00:21:20.878 "avg_latency_us": 27183.00261954845, 00:21:20.878 "min_latency_us": 4614.826666666667, 00:21:20.878 "max_latency_us": 32549.546666666665 00:21:20.878 } 00:21:20.878 ], 00:21:20.878 "core_count": 1 00:21:20.878 } 00:21:20.878 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:21:20.878 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.878 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.878 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.878 15:31:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:21:20.878 "subsystems": [ 00:21:20.878 { 00:21:20.878 "subsystem": "keyring", 00:21:20.878 "config": [ 00:21:20.878 { 00:21:20.878 "method": "keyring_file_add_key", 00:21:20.878 "params": { 00:21:20.878 "name": "key0", 00:21:20.878 "path": "/tmp/tmp.69cw9NEbjJ" 00:21:20.878 } 00:21:20.878 } 00:21:20.878 ] 00:21:20.878 }, 00:21:20.878 { 00:21:20.878 "subsystem": "iobuf", 00:21:20.878 "config": [ 00:21:20.878 { 00:21:20.878 "method": "iobuf_set_options", 00:21:20.878 "params": { 00:21:20.878 "small_pool_count": 8192, 00:21:20.878 "large_pool_count": 1024, 00:21:20.878 "small_bufsize": 8192, 00:21:20.878 "large_bufsize": 135168, 00:21:20.878 "enable_numa": false 00:21:20.878 } 00:21:20.878 } 00:21:20.878 ] 00:21:20.878 }, 00:21:20.878 { 00:21:20.878 "subsystem": "sock", 00:21:20.878 "config": [ 00:21:20.878 { 00:21:20.878 "method": "sock_set_default_impl", 00:21:20.878 "params": { 00:21:20.878 "impl_name": "posix" 00:21:20.878 } 00:21:20.878 }, 00:21:20.878 { 00:21:20.878 "method": "sock_impl_set_options", 00:21:20.878 "params": { 00:21:20.878 "impl_name": "ssl", 00:21:20.878 "recv_buf_size": 4096, 00:21:20.878 "send_buf_size": 4096, 00:21:20.878 "enable_recv_pipe": true, 00:21:20.878 "enable_quickack": false, 00:21:20.878 "enable_placement_id": 0, 00:21:20.878 "enable_zerocopy_send_server": true, 00:21:20.878 "enable_zerocopy_send_client": false, 00:21:20.878 "zerocopy_threshold": 0, 00:21:20.878 "tls_version": 0, 00:21:20.878 "enable_ktls": false 00:21:20.878 } 00:21:20.878 }, 00:21:20.878 { 00:21:20.878 "method": "sock_impl_set_options", 00:21:20.878 "params": { 00:21:20.878 "impl_name": "posix", 00:21:20.878 "recv_buf_size": 2097152, 00:21:20.878 "send_buf_size": 2097152, 00:21:20.878 "enable_recv_pipe": true, 00:21:20.878 "enable_quickack": false, 00:21:20.878 "enable_placement_id": 0, 00:21:20.878 "enable_zerocopy_send_server": true, 00:21:20.878 "enable_zerocopy_send_client": false, 00:21:20.878 "zerocopy_threshold": 0, 00:21:20.878 "tls_version": 0, 00:21:20.878 "enable_ktls": false 00:21:20.878 } 00:21:20.878 } 00:21:20.878 ] 00:21:20.878 }, 00:21:20.878 { 00:21:20.878 "subsystem": "vmd", 00:21:20.878 "config": [] 00:21:20.878 }, 00:21:20.878 { 00:21:20.878 "subsystem": "accel", 00:21:20.878 "config": [ 00:21:20.878 { 00:21:20.878 "method": "accel_set_options", 00:21:20.878 "params": { 00:21:20.878 "small_cache_size": 128, 00:21:20.878 "large_cache_size": 16, 00:21:20.878 "task_count": 2048, 00:21:20.878 "sequence_count": 2048, 00:21:20.878 "buf_count": 2048 00:21:20.878 } 00:21:20.878 } 00:21:20.878 ] 00:21:20.878 }, 00:21:20.878 { 00:21:20.878 "subsystem": "bdev", 00:21:20.878 "config": [ 00:21:20.878 { 00:21:20.878 "method": "bdev_set_options", 00:21:20.878 "params": { 00:21:20.878 "bdev_io_pool_size": 65535, 00:21:20.878 "bdev_io_cache_size": 256, 00:21:20.878 "bdev_auto_examine": true, 00:21:20.878 "iobuf_small_cache_size": 128, 00:21:20.878 "iobuf_large_cache_size": 16 00:21:20.878 } 00:21:20.878 }, 00:21:20.878 { 00:21:20.878 "method": "bdev_raid_set_options", 00:21:20.878 "params": { 00:21:20.878 "process_window_size_kb": 1024, 00:21:20.878 "process_max_bandwidth_mb_sec": 0 00:21:20.878 } 00:21:20.878 }, 00:21:20.878 { 00:21:20.878 "method": "bdev_iscsi_set_options", 00:21:20.878 "params": { 00:21:20.878 "timeout_sec": 30 00:21:20.878 } 00:21:20.878 }, 00:21:20.878 { 00:21:20.878 "method": "bdev_nvme_set_options", 00:21:20.878 "params": { 00:21:20.878 "action_on_timeout": "none", 00:21:20.879 
"timeout_us": 0, 00:21:20.879 "timeout_admin_us": 0, 00:21:20.879 "keep_alive_timeout_ms": 10000, 00:21:20.879 "arbitration_burst": 0, 00:21:20.879 "low_priority_weight": 0, 00:21:20.879 "medium_priority_weight": 0, 00:21:20.879 "high_priority_weight": 0, 00:21:20.879 "nvme_adminq_poll_period_us": 10000, 00:21:20.879 "nvme_ioq_poll_period_us": 0, 00:21:20.879 "io_queue_requests": 0, 00:21:20.879 "delay_cmd_submit": true, 00:21:20.879 "transport_retry_count": 4, 00:21:20.879 "bdev_retry_count": 3, 00:21:20.879 "transport_ack_timeout": 0, 00:21:20.879 "ctrlr_loss_timeout_sec": 0, 00:21:20.879 "reconnect_delay_sec": 0, 00:21:20.879 "fast_io_fail_timeout_sec": 0, 00:21:20.879 "disable_auto_failback": false, 00:21:20.879 "generate_uuids": false, 00:21:20.879 "transport_tos": 0, 00:21:20.879 "nvme_error_stat": false, 00:21:20.879 "rdma_srq_size": 0, 00:21:20.879 "io_path_stat": false, 00:21:20.879 "allow_accel_sequence": false, 00:21:20.879 "rdma_max_cq_size": 0, 00:21:20.879 "rdma_cm_event_timeout_ms": 0, 00:21:20.879 "dhchap_digests": [ 00:21:20.879 "sha256", 00:21:20.879 "sha384", 00:21:20.879 "sha512" 00:21:20.879 ], 00:21:20.879 "dhchap_dhgroups": [ 00:21:20.879 "null", 00:21:20.879 "ffdhe2048", 00:21:20.879 "ffdhe3072", 00:21:20.879 "ffdhe4096", 00:21:20.879 "ffdhe6144", 00:21:20.879 "ffdhe8192" 00:21:20.879 ] 00:21:20.879 } 00:21:20.879 }, 00:21:20.879 { 00:21:20.879 "method": "bdev_nvme_set_hotplug", 00:21:20.879 "params": { 00:21:20.879 "period_us": 100000, 00:21:20.879 "enable": false 00:21:20.879 } 00:21:20.879 }, 00:21:20.879 { 00:21:20.879 "method": "bdev_malloc_create", 00:21:20.879 "params": { 00:21:20.879 "name": "malloc0", 00:21:20.879 "num_blocks": 8192, 00:21:20.879 "block_size": 4096, 00:21:20.879 "physical_block_size": 4096, 00:21:20.879 "uuid": "24c0c5fc-5692-474b-b153-2c493d7206a9", 00:21:20.879 "optimal_io_boundary": 0, 00:21:20.879 "md_size": 0, 00:21:20.879 "dif_type": 0, 00:21:20.879 "dif_is_head_of_md": false, 00:21:20.879 "dif_pi_format": 0 00:21:20.879 } 00:21:20.879 }, 00:21:20.879 { 00:21:20.879 "method": "bdev_wait_for_examine" 00:21:20.879 } 00:21:20.879 ] 00:21:20.879 }, 00:21:20.879 { 00:21:20.879 "subsystem": "nbd", 00:21:20.879 "config": [] 00:21:20.879 }, 00:21:20.879 { 00:21:20.879 "subsystem": "scheduler", 00:21:20.879 "config": [ 00:21:20.879 { 00:21:20.879 "method": "framework_set_scheduler", 00:21:20.879 "params": { 00:21:20.879 "name": "static" 00:21:20.879 } 00:21:20.879 } 00:21:20.879 ] 00:21:20.879 }, 00:21:20.879 { 00:21:20.879 "subsystem": "nvmf", 00:21:20.879 "config": [ 00:21:20.879 { 00:21:20.879 "method": "nvmf_set_config", 00:21:20.879 "params": { 00:21:20.879 "discovery_filter": "match_any", 00:21:20.879 "admin_cmd_passthru": { 00:21:20.879 "identify_ctrlr": false 00:21:20.879 }, 00:21:20.879 "dhchap_digests": [ 00:21:20.879 "sha256", 00:21:20.879 "sha384", 00:21:20.879 "sha512" 00:21:20.879 ], 00:21:20.879 "dhchap_dhgroups": [ 00:21:20.879 "null", 00:21:20.879 "ffdhe2048", 00:21:20.879 "ffdhe3072", 00:21:20.879 "ffdhe4096", 00:21:20.879 "ffdhe6144", 00:21:20.879 "ffdhe8192" 00:21:20.879 ] 00:21:20.879 } 00:21:20.879 }, 00:21:20.879 { 00:21:20.879 "method": "nvmf_set_max_subsystems", 00:21:20.879 "params": { 00:21:20.879 "max_subsystems": 1024 00:21:20.879 } 00:21:20.879 }, 00:21:20.879 { 00:21:20.879 "method": "nvmf_set_crdt", 00:21:20.879 "params": { 00:21:20.879 "crdt1": 0, 00:21:20.879 "crdt2": 0, 00:21:20.879 "crdt3": 0 00:21:20.879 } 00:21:20.879 }, 00:21:20.879 { 00:21:20.879 "method": "nvmf_create_transport", 00:21:20.879 "params": 
{ 00:21:20.879 "trtype": "TCP", 00:21:20.879 "max_queue_depth": 128, 00:21:20.879 "max_io_qpairs_per_ctrlr": 127, 00:21:20.879 "in_capsule_data_size": 4096, 00:21:20.879 "max_io_size": 131072, 00:21:20.879 "io_unit_size": 131072, 00:21:20.879 "max_aq_depth": 128, 00:21:20.879 "num_shared_buffers": 511, 00:21:20.879 "buf_cache_size": 4294967295, 00:21:20.879 "dif_insert_or_strip": false, 00:21:20.879 "zcopy": false, 00:21:20.879 "c2h_success": false, 00:21:20.879 "sock_priority": 0, 00:21:20.879 "abort_timeout_sec": 1, 00:21:20.879 "ack_timeout": 0, 00:21:20.879 "data_wr_pool_size": 0 00:21:20.879 } 00:21:20.879 }, 00:21:20.879 { 00:21:20.879 "method": "nvmf_create_subsystem", 00:21:20.879 "params": { 00:21:20.879 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.879 "allow_any_host": false, 00:21:20.879 "serial_number": "00000000000000000000", 00:21:20.879 "model_number": "SPDK bdev Controller", 00:21:20.879 "max_namespaces": 32, 00:21:20.879 "min_cntlid": 1, 00:21:20.879 "max_cntlid": 65519, 00:21:20.879 "ana_reporting": false 00:21:20.879 } 00:21:20.879 }, 00:21:20.879 { 00:21:20.879 "method": "nvmf_subsystem_add_host", 00:21:20.879 "params": { 00:21:20.879 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.879 "host": "nqn.2016-06.io.spdk:host1", 00:21:20.879 "psk": "key0" 00:21:20.879 } 00:21:20.879 }, 00:21:20.879 { 00:21:20.879 "method": "nvmf_subsystem_add_ns", 00:21:20.879 "params": { 00:21:20.879 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.879 "namespace": { 00:21:20.879 "nsid": 1, 00:21:20.879 "bdev_name": "malloc0", 00:21:20.879 "nguid": "24C0C5FC5692474BB1532C493D7206A9", 00:21:20.879 "uuid": "24c0c5fc-5692-474b-b153-2c493d7206a9", 00:21:20.879 "no_auto_visible": false 00:21:20.879 } 00:21:20.879 } 00:21:20.879 }, 00:21:20.879 { 00:21:20.879 "method": "nvmf_subsystem_add_listener", 00:21:20.879 "params": { 00:21:20.879 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.879 "listen_address": { 00:21:20.879 "trtype": "TCP", 00:21:20.879 "adrfam": "IPv4", 00:21:20.879 "traddr": "10.0.0.2", 00:21:20.879 "trsvcid": "4420" 00:21:20.879 }, 00:21:20.879 "secure_channel": false, 00:21:20.879 "sock_impl": "ssl" 00:21:20.879 } 00:21:20.879 } 00:21:20.879 ] 00:21:20.879 } 00:21:20.879 ] 00:21:20.879 }' 00:21:20.879 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:21.140 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:21:21.140 "subsystems": [ 00:21:21.140 { 00:21:21.140 "subsystem": "keyring", 00:21:21.140 "config": [ 00:21:21.140 { 00:21:21.140 "method": "keyring_file_add_key", 00:21:21.140 "params": { 00:21:21.140 "name": "key0", 00:21:21.140 "path": "/tmp/tmp.69cw9NEbjJ" 00:21:21.140 } 00:21:21.140 } 00:21:21.140 ] 00:21:21.140 }, 00:21:21.140 { 00:21:21.140 "subsystem": "iobuf", 00:21:21.140 "config": [ 00:21:21.140 { 00:21:21.140 "method": "iobuf_set_options", 00:21:21.140 "params": { 00:21:21.140 "small_pool_count": 8192, 00:21:21.140 "large_pool_count": 1024, 00:21:21.140 "small_bufsize": 8192, 00:21:21.140 "large_bufsize": 135168, 00:21:21.140 "enable_numa": false 00:21:21.140 } 00:21:21.140 } 00:21:21.141 ] 00:21:21.141 }, 00:21:21.141 { 00:21:21.141 "subsystem": "sock", 00:21:21.141 "config": [ 00:21:21.141 { 00:21:21.141 "method": "sock_set_default_impl", 00:21:21.141 "params": { 00:21:21.141 "impl_name": "posix" 00:21:21.141 } 00:21:21.141 }, 00:21:21.141 { 00:21:21.141 "method": "sock_impl_set_options", 00:21:21.141 
"params": { 00:21:21.141 "impl_name": "ssl", 00:21:21.141 "recv_buf_size": 4096, 00:21:21.141 "send_buf_size": 4096, 00:21:21.141 "enable_recv_pipe": true, 00:21:21.141 "enable_quickack": false, 00:21:21.141 "enable_placement_id": 0, 00:21:21.141 "enable_zerocopy_send_server": true, 00:21:21.141 "enable_zerocopy_send_client": false, 00:21:21.141 "zerocopy_threshold": 0, 00:21:21.141 "tls_version": 0, 00:21:21.141 "enable_ktls": false 00:21:21.141 } 00:21:21.141 }, 00:21:21.141 { 00:21:21.141 "method": "sock_impl_set_options", 00:21:21.141 "params": { 00:21:21.141 "impl_name": "posix", 00:21:21.141 "recv_buf_size": 2097152, 00:21:21.141 "send_buf_size": 2097152, 00:21:21.141 "enable_recv_pipe": true, 00:21:21.141 "enable_quickack": false, 00:21:21.141 "enable_placement_id": 0, 00:21:21.141 "enable_zerocopy_send_server": true, 00:21:21.141 "enable_zerocopy_send_client": false, 00:21:21.141 "zerocopy_threshold": 0, 00:21:21.141 "tls_version": 0, 00:21:21.141 "enable_ktls": false 00:21:21.141 } 00:21:21.141 } 00:21:21.141 ] 00:21:21.141 }, 00:21:21.141 { 00:21:21.141 "subsystem": "vmd", 00:21:21.141 "config": [] 00:21:21.141 }, 00:21:21.141 { 00:21:21.141 "subsystem": "accel", 00:21:21.141 "config": [ 00:21:21.141 { 00:21:21.141 "method": "accel_set_options", 00:21:21.141 "params": { 00:21:21.141 "small_cache_size": 128, 00:21:21.141 "large_cache_size": 16, 00:21:21.141 "task_count": 2048, 00:21:21.141 "sequence_count": 2048, 00:21:21.141 "buf_count": 2048 00:21:21.141 } 00:21:21.141 } 00:21:21.141 ] 00:21:21.141 }, 00:21:21.141 { 00:21:21.141 "subsystem": "bdev", 00:21:21.141 "config": [ 00:21:21.141 { 00:21:21.141 "method": "bdev_set_options", 00:21:21.141 "params": { 00:21:21.141 "bdev_io_pool_size": 65535, 00:21:21.141 "bdev_io_cache_size": 256, 00:21:21.141 "bdev_auto_examine": true, 00:21:21.141 "iobuf_small_cache_size": 128, 00:21:21.141 "iobuf_large_cache_size": 16 00:21:21.141 } 00:21:21.141 }, 00:21:21.141 { 00:21:21.141 "method": "bdev_raid_set_options", 00:21:21.141 "params": { 00:21:21.141 "process_window_size_kb": 1024, 00:21:21.141 "process_max_bandwidth_mb_sec": 0 00:21:21.141 } 00:21:21.141 }, 00:21:21.141 { 00:21:21.141 "method": "bdev_iscsi_set_options", 00:21:21.141 "params": { 00:21:21.141 "timeout_sec": 30 00:21:21.141 } 00:21:21.141 }, 00:21:21.141 { 00:21:21.141 "method": "bdev_nvme_set_options", 00:21:21.141 "params": { 00:21:21.141 "action_on_timeout": "none", 00:21:21.141 "timeout_us": 0, 00:21:21.141 "timeout_admin_us": 0, 00:21:21.141 "keep_alive_timeout_ms": 10000, 00:21:21.141 "arbitration_burst": 0, 00:21:21.141 "low_priority_weight": 0, 00:21:21.141 "medium_priority_weight": 0, 00:21:21.141 "high_priority_weight": 0, 00:21:21.141 "nvme_adminq_poll_period_us": 10000, 00:21:21.141 "nvme_ioq_poll_period_us": 0, 00:21:21.141 "io_queue_requests": 512, 00:21:21.141 "delay_cmd_submit": true, 00:21:21.141 "transport_retry_count": 4, 00:21:21.141 "bdev_retry_count": 3, 00:21:21.141 "transport_ack_timeout": 0, 00:21:21.141 "ctrlr_loss_timeout_sec": 0, 00:21:21.141 "reconnect_delay_sec": 0, 00:21:21.141 "fast_io_fail_timeout_sec": 0, 00:21:21.141 "disable_auto_failback": false, 00:21:21.141 "generate_uuids": false, 00:21:21.141 "transport_tos": 0, 00:21:21.141 "nvme_error_stat": false, 00:21:21.141 "rdma_srq_size": 0, 00:21:21.141 "io_path_stat": false, 00:21:21.141 "allow_accel_sequence": false, 00:21:21.141 "rdma_max_cq_size": 0, 00:21:21.141 "rdma_cm_event_timeout_ms": 0, 00:21:21.141 "dhchap_digests": [ 00:21:21.141 "sha256", 00:21:21.141 "sha384", 00:21:21.141 
"sha512" 00:21:21.141 ], 00:21:21.141 "dhchap_dhgroups": [ 00:21:21.141 "null", 00:21:21.141 "ffdhe2048", 00:21:21.141 "ffdhe3072", 00:21:21.141 "ffdhe4096", 00:21:21.141 "ffdhe6144", 00:21:21.141 "ffdhe8192" 00:21:21.141 ] 00:21:21.141 } 00:21:21.141 }, 00:21:21.141 { 00:21:21.141 "method": "bdev_nvme_attach_controller", 00:21:21.141 "params": { 00:21:21.141 "name": "nvme0", 00:21:21.141 "trtype": "TCP", 00:21:21.141 "adrfam": "IPv4", 00:21:21.141 "traddr": "10.0.0.2", 00:21:21.141 "trsvcid": "4420", 00:21:21.141 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.141 "prchk_reftag": false, 00:21:21.141 "prchk_guard": false, 00:21:21.141 "ctrlr_loss_timeout_sec": 0, 00:21:21.141 "reconnect_delay_sec": 0, 00:21:21.141 "fast_io_fail_timeout_sec": 0, 00:21:21.141 "psk": "key0", 00:21:21.141 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:21.141 "hdgst": false, 00:21:21.141 "ddgst": false, 00:21:21.141 "multipath": "multipath" 00:21:21.141 } 00:21:21.141 }, 00:21:21.141 { 00:21:21.141 "method": "bdev_nvme_set_hotplug", 00:21:21.141 "params": { 00:21:21.141 "period_us": 100000, 00:21:21.141 "enable": false 00:21:21.141 } 00:21:21.141 }, 00:21:21.141 { 00:21:21.141 "method": "bdev_enable_histogram", 00:21:21.141 "params": { 00:21:21.141 "name": "nvme0n1", 00:21:21.141 "enable": true 00:21:21.141 } 00:21:21.141 }, 00:21:21.141 { 00:21:21.141 "method": "bdev_wait_for_examine" 00:21:21.141 } 00:21:21.141 ] 00:21:21.141 }, 00:21:21.141 { 00:21:21.141 "subsystem": "nbd", 00:21:21.141 "config": [] 00:21:21.141 } 00:21:21.141 ] 00:21:21.141 }' 00:21:21.141 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 627525 00:21:21.141 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 627525 ']' 00:21:21.141 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 627525 00:21:21.141 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:21.141 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:21.141 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 627525 00:21:21.141 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:21.141 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:21.141 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 627525' 00:21:21.141 killing process with pid 627525 00:21:21.141 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 627525 00:21:21.141 Received shutdown signal, test time was about 1.000000 seconds 00:21:21.141 00:21:21.141 Latency(us) 00:21:21.141 [2024-11-20T14:31:10.101Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.141 [2024-11-20T14:31:10.101Z] =================================================================================================================== 00:21:21.141 [2024-11-20T14:31:10.101Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:21.141 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 627525 00:21:21.403 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 627313 00:21:21.403 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 627313 ']' 
00:21:21.403 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 627313 00:21:21.403 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:21.403 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:21.403 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 627313 00:21:21.403 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:21.403 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:21.403 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 627313' 00:21:21.403 killing process with pid 627313 00:21:21.403 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 627313 00:21:21.403 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 627313 00:21:21.403 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:21:21.403 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:21.403 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:21.403 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.403 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:21:21.403 "subsystems": [ 00:21:21.403 { 00:21:21.403 "subsystem": "keyring", 00:21:21.403 "config": [ 00:21:21.403 { 00:21:21.403 "method": "keyring_file_add_key", 00:21:21.403 "params": { 00:21:21.403 "name": "key0", 00:21:21.403 "path": "/tmp/tmp.69cw9NEbjJ" 00:21:21.403 } 00:21:21.403 } 00:21:21.403 ] 00:21:21.403 }, 00:21:21.403 { 00:21:21.403 "subsystem": "iobuf", 00:21:21.403 "config": [ 00:21:21.403 { 00:21:21.403 "method": "iobuf_set_options", 00:21:21.403 "params": { 00:21:21.403 "small_pool_count": 8192, 00:21:21.403 "large_pool_count": 1024, 00:21:21.403 "small_bufsize": 8192, 00:21:21.403 "large_bufsize": 135168, 00:21:21.403 "enable_numa": false 00:21:21.403 } 00:21:21.403 } 00:21:21.403 ] 00:21:21.403 }, 00:21:21.403 { 00:21:21.403 "subsystem": "sock", 00:21:21.403 "config": [ 00:21:21.403 { 00:21:21.403 "method": "sock_set_default_impl", 00:21:21.403 "params": { 00:21:21.403 "impl_name": "posix" 00:21:21.403 } 00:21:21.403 }, 00:21:21.403 { 00:21:21.403 "method": "sock_impl_set_options", 00:21:21.403 "params": { 00:21:21.403 "impl_name": "ssl", 00:21:21.403 "recv_buf_size": 4096, 00:21:21.403 "send_buf_size": 4096, 00:21:21.403 "enable_recv_pipe": true, 00:21:21.403 "enable_quickack": false, 00:21:21.403 "enable_placement_id": 0, 00:21:21.403 "enable_zerocopy_send_server": true, 00:21:21.403 "enable_zerocopy_send_client": false, 00:21:21.403 "zerocopy_threshold": 0, 00:21:21.403 "tls_version": 0, 00:21:21.403 "enable_ktls": false 00:21:21.403 } 00:21:21.403 }, 00:21:21.403 { 00:21:21.403 "method": "sock_impl_set_options", 00:21:21.403 "params": { 00:21:21.403 "impl_name": "posix", 00:21:21.403 "recv_buf_size": 2097152, 00:21:21.403 "send_buf_size": 2097152, 00:21:21.403 "enable_recv_pipe": true, 00:21:21.403 "enable_quickack": false, 00:21:21.403 "enable_placement_id": 0, 00:21:21.403 "enable_zerocopy_send_server": true, 00:21:21.403 "enable_zerocopy_send_client": false, 
00:21:21.403 "zerocopy_threshold": 0, 00:21:21.403 "tls_version": 0, 00:21:21.403 "enable_ktls": false 00:21:21.403 } 00:21:21.403 } 00:21:21.403 ] 00:21:21.403 }, 00:21:21.403 { 00:21:21.403 "subsystem": "vmd", 00:21:21.403 "config": [] 00:21:21.403 }, 00:21:21.403 { 00:21:21.403 "subsystem": "accel", 00:21:21.403 "config": [ 00:21:21.403 { 00:21:21.403 "method": "accel_set_options", 00:21:21.403 "params": { 00:21:21.403 "small_cache_size": 128, 00:21:21.403 "large_cache_size": 16, 00:21:21.403 "task_count": 2048, 00:21:21.403 "sequence_count": 2048, 00:21:21.403 "buf_count": 2048 00:21:21.403 } 00:21:21.403 } 00:21:21.403 ] 00:21:21.403 }, 00:21:21.403 { 00:21:21.403 "subsystem": "bdev", 00:21:21.403 "config": [ 00:21:21.403 { 00:21:21.403 "method": "bdev_set_options", 00:21:21.403 "params": { 00:21:21.403 "bdev_io_pool_size": 65535, 00:21:21.403 "bdev_io_cache_size": 256, 00:21:21.403 "bdev_auto_examine": true, 00:21:21.403 "iobuf_small_cache_size": 128, 00:21:21.403 "iobuf_large_cache_size": 16 00:21:21.403 } 00:21:21.403 }, 00:21:21.403 { 00:21:21.403 "method": "bdev_raid_set_options", 00:21:21.403 "params": { 00:21:21.403 "process_window_size_kb": 1024, 00:21:21.403 "process_max_bandwidth_mb_sec": 0 00:21:21.403 } 00:21:21.403 }, 00:21:21.403 { 00:21:21.403 "method": "bdev_iscsi_set_options", 00:21:21.403 "params": { 00:21:21.403 "timeout_sec": 30 00:21:21.403 } 00:21:21.403 }, 00:21:21.403 { 00:21:21.403 "method": "bdev_nvme_set_options", 00:21:21.403 "params": { 00:21:21.403 "action_on_timeout": "none", 00:21:21.403 "timeout_us": 0, 00:21:21.403 "timeout_admin_us": 0, 00:21:21.403 "keep_alive_timeout_ms": 10000, 00:21:21.403 "arbitration_burst": 0, 00:21:21.403 "low_priority_weight": 0, 00:21:21.403 "medium_priority_weight": 0, 00:21:21.403 "high_priority_weight": 0, 00:21:21.403 "nvme_adminq_poll_period_us": 10000, 00:21:21.403 "nvme_ioq_poll_period_us": 0, 00:21:21.403 "io_queue_requests": 0, 00:21:21.403 "delay_cmd_submit": true, 00:21:21.403 "transport_retry_count": 4, 00:21:21.403 "bdev_retry_count": 3, 00:21:21.403 "transport_ack_timeout": 0, 00:21:21.403 "ctrlr_loss_timeout_sec": 0, 00:21:21.403 "reconnect_delay_sec": 0, 00:21:21.403 "fast_io_fail_timeout_sec": 0, 00:21:21.403 "disable_auto_failback": false, 00:21:21.403 "generate_uuids": false, 00:21:21.403 "transport_tos": 0, 00:21:21.403 "nvme_error_stat": false, 00:21:21.403 "rdma_srq_size": 0, 00:21:21.403 "io_path_stat": false, 00:21:21.403 "allow_accel_sequence": false, 00:21:21.403 "rdma_max_cq_size": 0, 00:21:21.403 "rdma_cm_event_timeout_ms": 0, 00:21:21.403 "dhchap_digests": [ 00:21:21.403 "sha256", 00:21:21.403 "sha384", 00:21:21.403 "sha512" 00:21:21.403 ], 00:21:21.403 "dhchap_dhgroups": [ 00:21:21.403 "null", 00:21:21.403 "ffdhe2048", 00:21:21.403 "ffdhe3072", 00:21:21.403 "ffdhe4096", 00:21:21.403 "ffdhe6144", 00:21:21.403 "ffdhe8192" 00:21:21.403 ] 00:21:21.403 } 00:21:21.403 }, 00:21:21.403 { 00:21:21.403 "method": "bdev_nvme_set_hotplug", 00:21:21.403 "params": { 00:21:21.403 "period_us": 100000, 00:21:21.403 "enable": false 00:21:21.403 } 00:21:21.403 }, 00:21:21.403 { 00:21:21.403 "method": "bdev_malloc_create", 00:21:21.403 "params": { 00:21:21.403 "name": "malloc0", 00:21:21.403 "num_blocks": 8192, 00:21:21.403 "block_size": 4096, 00:21:21.403 "physical_block_size": 4096, 00:21:21.403 "uuid": "24c0c5fc-5692-474b-b153-2c493d7206a9", 00:21:21.403 "optimal_io_boundary": 0, 00:21:21.403 "md_size": 0, 00:21:21.403 "dif_type": 0, 00:21:21.403 "dif_is_head_of_md": false, 00:21:21.403 "dif_pi_format": 0 
00:21:21.403 } 00:21:21.403 }, 00:21:21.403 { 00:21:21.403 "method": "bdev_wait_for_examine" 00:21:21.403 } 00:21:21.403 ] 00:21:21.403 }, 00:21:21.403 { 00:21:21.403 "subsystem": "nbd", 00:21:21.403 "config": [] 00:21:21.403 }, 00:21:21.403 { 00:21:21.403 "subsystem": "scheduler", 00:21:21.403 "config": [ 00:21:21.403 { 00:21:21.403 "method": "framework_set_scheduler", 00:21:21.403 "params": { 00:21:21.403 "name": "static" 00:21:21.403 } 00:21:21.403 } 00:21:21.403 ] 00:21:21.403 }, 00:21:21.403 { 00:21:21.403 "subsystem": "nvmf", 00:21:21.403 "config": [ 00:21:21.403 { 00:21:21.403 "method": "nvmf_set_config", 00:21:21.403 "params": { 00:21:21.403 "discovery_filter": "match_any", 00:21:21.403 "admin_cmd_passthru": { 00:21:21.403 "identify_ctrlr": false 00:21:21.403 }, 00:21:21.403 "dhchap_digests": [ 00:21:21.403 "sha256", 00:21:21.403 "sha384", 00:21:21.403 "sha512" 00:21:21.403 ], 00:21:21.403 "dhchap_dhgroups": [ 00:21:21.403 "null", 00:21:21.403 "ffdhe2048", 00:21:21.403 "ffdhe3072", 00:21:21.404 "ffdhe4096", 00:21:21.404 "ffdhe6144", 00:21:21.404 "ffdhe8192" 00:21:21.404 ] 00:21:21.404 } 00:21:21.404 }, 00:21:21.404 { 00:21:21.404 "method": "nvmf_set_max_subsystems", 00:21:21.404 "params": { 00:21:21.404 "max_subsystems": 1024 00:21:21.404 } 00:21:21.404 }, 00:21:21.404 { 00:21:21.404 "method": "nvmf_set_crdt", 00:21:21.404 "params": { 00:21:21.404 "crdt1": 0, 00:21:21.404 "crdt2": 0, 00:21:21.404 "crdt3": 0 00:21:21.404 } 00:21:21.404 }, 00:21:21.404 { 00:21:21.404 "method": "nvmf_create_transport", 00:21:21.404 "params": { 00:21:21.404 "trtype": "TCP", 00:21:21.404 "max_queue_depth": 128, 00:21:21.404 "max_io_qpairs_per_ctrlr": 127, 00:21:21.404 "in_capsule_data_size": 4096, 00:21:21.404 "max_io_size": 131072, 00:21:21.404 "io_unit_size": 131072, 00:21:21.404 "max_aq_depth": 128, 00:21:21.404 "num_shared_buffers": 511, 00:21:21.404 "buf_cache_size": 4294967295, 00:21:21.404 "dif_insert_or_strip": false, 00:21:21.404 "zcopy": false, 00:21:21.404 "c2h_success": false, 00:21:21.404 "sock_priority": 0, 00:21:21.404 "abort_timeout_sec": 1, 00:21:21.404 "ack_timeout": 0, 00:21:21.404 "data_wr_pool_size": 0 00:21:21.404 } 00:21:21.404 }, 00:21:21.404 { 00:21:21.404 "method": "nvmf_create_subsystem", 00:21:21.404 "params": { 00:21:21.404 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.404 "allow_any_host": false, 00:21:21.404 "serial_number": "00000000000000000000", 00:21:21.404 "model_number": "SPDK bdev Controller", 00:21:21.404 "max_namespaces": 32, 00:21:21.404 "min_cntlid": 1, 00:21:21.404 "max_cntlid": 65519, 00:21:21.404 "ana_reporting": false 00:21:21.404 } 00:21:21.404 }, 00:21:21.404 { 00:21:21.404 "method": "nvmf_subsystem_add_host", 00:21:21.404 "params": { 00:21:21.404 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.404 "host": "nqn.2016-06.io.spdk:host1", 00:21:21.404 "psk": "key0" 00:21:21.404 } 00:21:21.404 }, 00:21:21.404 { 00:21:21.404 "method": "nvmf_subsystem_add_ns", 00:21:21.404 "params": { 00:21:21.404 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.404 "namespace": { 00:21:21.404 "nsid": 1, 00:21:21.404 "bdev_name": "malloc0", 00:21:21.404 "nguid": "24C0C5FC5692474BB1532C493D7206A9", 00:21:21.404 "uuid": "24c0c5fc-5692-474b-b153-2c493d7206a9", 00:21:21.404 "no_auto_visible": false 00:21:21.404 } 00:21:21.404 } 00:21:21.404 }, 00:21:21.404 { 00:21:21.404 "method": "nvmf_subsystem_add_listener", 00:21:21.404 "params": { 00:21:21.404 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.404 "listen_address": { 00:21:21.404 "trtype": "TCP", 00:21:21.404 "adrfam": "IPv4", 
00:21:21.404 "traddr": "10.0.0.2", 00:21:21.404 "trsvcid": "4420" 00:21:21.404 }, 00:21:21.404 "secure_channel": false, 00:21:21.404 "sock_impl": "ssl" 00:21:21.404 } 00:21:21.404 } 00:21:21.404 ] 00:21:21.404 } 00:21:21.404 ] 00:21:21.404 }' 00:21:21.404 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=628208 00:21:21.404 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 628208 00:21:21.404 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:21.404 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 628208 ']' 00:21:21.404 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.404 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:21.404 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:21.404 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:21.404 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.665 [2024-11-20 15:31:10.384912] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:21:21.665 [2024-11-20 15:31:10.384967] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:21.665 [2024-11-20 15:31:10.472094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.665 [2024-11-20 15:31:10.500778] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:21.665 [2024-11-20 15:31:10.500806] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:21.665 [2024-11-20 15:31:10.500812] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:21.665 [2024-11-20 15:31:10.500817] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:21.665 [2024-11-20 15:31:10.500822] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:21.665 [2024-11-20 15:31:10.501295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.926 [2024-11-20 15:31:10.694404] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:21.926 [2024-11-20 15:31:10.726437] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:21.926 [2024-11-20 15:31:10.726649] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.497 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:22.497 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:22.497 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:22.497 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:22.497 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.497 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:22.497 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=628259 00:21:22.497 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 628259 /var/tmp/bdevperf.sock 00:21:22.497 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 628259 ']' 00:21:22.497 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:22.497 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:22.497 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:22.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
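Unlike the earlier runs, this bdevperf instance is handed its entire configuration up front: the -c /dev/fd/63 below is bash process substitution over the JSON captured from the first bdevperf, so the keyring entry and the TLS-connected nvme0 controller are created during startup instead of by post-launch RPCs. The pattern, sketched with $bperfcfg holding that saved JSON:

./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg")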
00:21:22.497 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:22.497 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:22.497 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.497 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:21:22.497 "subsystems": [ 00:21:22.497 { 00:21:22.497 "subsystem": "keyring", 00:21:22.497 "config": [ 00:21:22.497 { 00:21:22.497 "method": "keyring_file_add_key", 00:21:22.497 "params": { 00:21:22.497 "name": "key0", 00:21:22.497 "path": "/tmp/tmp.69cw9NEbjJ" 00:21:22.497 } 00:21:22.497 } 00:21:22.497 ] 00:21:22.497 }, 00:21:22.497 { 00:21:22.497 "subsystem": "iobuf", 00:21:22.497 "config": [ 00:21:22.497 { 00:21:22.497 "method": "iobuf_set_options", 00:21:22.497 "params": { 00:21:22.497 "small_pool_count": 8192, 00:21:22.497 "large_pool_count": 1024, 00:21:22.497 "small_bufsize": 8192, 00:21:22.497 "large_bufsize": 135168, 00:21:22.497 "enable_numa": false 00:21:22.497 } 00:21:22.497 } 00:21:22.497 ] 00:21:22.497 }, 00:21:22.497 { 00:21:22.497 "subsystem": "sock", 00:21:22.497 "config": [ 00:21:22.497 { 00:21:22.497 "method": "sock_set_default_impl", 00:21:22.497 "params": { 00:21:22.497 "impl_name": "posix" 00:21:22.497 } 00:21:22.497 }, 00:21:22.497 { 00:21:22.497 "method": "sock_impl_set_options", 00:21:22.497 "params": { 00:21:22.497 "impl_name": "ssl", 00:21:22.497 "recv_buf_size": 4096, 00:21:22.497 "send_buf_size": 4096, 00:21:22.497 "enable_recv_pipe": true, 00:21:22.497 "enable_quickack": false, 00:21:22.497 "enable_placement_id": 0, 00:21:22.497 "enable_zerocopy_send_server": true, 00:21:22.497 "enable_zerocopy_send_client": false, 00:21:22.497 "zerocopy_threshold": 0, 00:21:22.497 "tls_version": 0, 00:21:22.497 "enable_ktls": false 00:21:22.497 } 00:21:22.497 }, 00:21:22.497 { 00:21:22.497 "method": "sock_impl_set_options", 00:21:22.497 "params": { 00:21:22.497 "impl_name": "posix", 00:21:22.497 "recv_buf_size": 2097152, 00:21:22.497 "send_buf_size": 2097152, 00:21:22.497 "enable_recv_pipe": true, 00:21:22.497 "enable_quickack": false, 00:21:22.497 "enable_placement_id": 0, 00:21:22.497 "enable_zerocopy_send_server": true, 00:21:22.497 "enable_zerocopy_send_client": false, 00:21:22.497 "zerocopy_threshold": 0, 00:21:22.497 "tls_version": 0, 00:21:22.497 "enable_ktls": false 00:21:22.497 } 00:21:22.497 } 00:21:22.497 ] 00:21:22.497 }, 00:21:22.497 { 00:21:22.497 "subsystem": "vmd", 00:21:22.497 "config": [] 00:21:22.497 }, 00:21:22.497 { 00:21:22.497 "subsystem": "accel", 00:21:22.497 "config": [ 00:21:22.497 { 00:21:22.497 "method": "accel_set_options", 00:21:22.497 "params": { 00:21:22.497 "small_cache_size": 128, 00:21:22.497 "large_cache_size": 16, 00:21:22.497 "task_count": 2048, 00:21:22.497 "sequence_count": 2048, 00:21:22.497 "buf_count": 2048 00:21:22.497 } 00:21:22.497 } 00:21:22.497 ] 00:21:22.497 }, 00:21:22.497 { 00:21:22.497 "subsystem": "bdev", 00:21:22.497 "config": [ 00:21:22.497 { 00:21:22.497 "method": "bdev_set_options", 00:21:22.498 "params": { 00:21:22.498 "bdev_io_pool_size": 65535, 00:21:22.498 "bdev_io_cache_size": 256, 00:21:22.498 "bdev_auto_examine": true, 00:21:22.498 "iobuf_small_cache_size": 128, 00:21:22.498 "iobuf_large_cache_size": 16 00:21:22.498 } 00:21:22.498 }, 00:21:22.498 { 00:21:22.498 "method": 
"bdev_raid_set_options", 00:21:22.498 "params": { 00:21:22.498 "process_window_size_kb": 1024, 00:21:22.498 "process_max_bandwidth_mb_sec": 0 00:21:22.498 } 00:21:22.498 }, 00:21:22.498 { 00:21:22.498 "method": "bdev_iscsi_set_options", 00:21:22.498 "params": { 00:21:22.498 "timeout_sec": 30 00:21:22.498 } 00:21:22.498 }, 00:21:22.498 { 00:21:22.498 "method": "bdev_nvme_set_options", 00:21:22.498 "params": { 00:21:22.498 "action_on_timeout": "none", 00:21:22.498 "timeout_us": 0, 00:21:22.498 "timeout_admin_us": 0, 00:21:22.498 "keep_alive_timeout_ms": 10000, 00:21:22.498 "arbitration_burst": 0, 00:21:22.498 "low_priority_weight": 0, 00:21:22.498 "medium_priority_weight": 0, 00:21:22.498 "high_priority_weight": 0, 00:21:22.498 "nvme_adminq_poll_period_us": 10000, 00:21:22.498 "nvme_ioq_poll_period_us": 0, 00:21:22.498 "io_queue_requests": 512, 00:21:22.498 "delay_cmd_submit": true, 00:21:22.498 "transport_retry_count": 4, 00:21:22.498 "bdev_retry_count": 3, 00:21:22.498 "transport_ack_timeout": 0, 00:21:22.498 "ctrlr_loss_timeout_sec": 0, 00:21:22.498 "reconnect_delay_sec": 0, 00:21:22.498 "fast_io_fail_timeout_sec": 0, 00:21:22.498 "disable_auto_failback": false, 00:21:22.498 "generate_uuids": false, 00:21:22.498 "transport_tos": 0, 00:21:22.498 "nvme_error_stat": false, 00:21:22.498 "rdma_srq_size": 0, 00:21:22.498 "io_path_stat": false, 00:21:22.498 "allow_accel_sequence": false, 00:21:22.498 "rdma_max_cq_size": 0, 00:21:22.498 "rdma_cm_event_timeout_ms": 0, 00:21:22.498 "dhchap_digests": [ 00:21:22.498 "sha256", 00:21:22.498 "sha384", 00:21:22.498 "sha512" 00:21:22.498 ], 00:21:22.498 "dhchap_dhgroups": [ 00:21:22.498 "null", 00:21:22.498 "ffdhe2048", 00:21:22.498 "ffdhe3072", 00:21:22.498 "ffdhe4096", 00:21:22.498 "ffdhe6144", 00:21:22.498 "ffdhe8192" 00:21:22.498 ] 00:21:22.498 } 00:21:22.498 }, 00:21:22.498 { 00:21:22.498 "method": "bdev_nvme_attach_controller", 00:21:22.498 "params": { 00:21:22.498 "name": "nvme0", 00:21:22.498 "trtype": "TCP", 00:21:22.498 "adrfam": "IPv4", 00:21:22.498 "traddr": "10.0.0.2", 00:21:22.498 "trsvcid": "4420", 00:21:22.498 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.498 "prchk_reftag": false, 00:21:22.498 "prchk_guard": false, 00:21:22.498 "ctrlr_loss_timeout_sec": 0, 00:21:22.498 "reconnect_delay_sec": 0, 00:21:22.498 "fast_io_fail_timeout_sec": 0, 00:21:22.498 "psk": "key0", 00:21:22.498 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:22.498 "hdgst": false, 00:21:22.498 "ddgst": false, 00:21:22.498 "multipath": "multipath" 00:21:22.498 } 00:21:22.498 }, 00:21:22.498 { 00:21:22.498 "method": "bdev_nvme_set_hotplug", 00:21:22.498 "params": { 00:21:22.498 "period_us": 100000, 00:21:22.498 "enable": false 00:21:22.498 } 00:21:22.498 }, 00:21:22.498 { 00:21:22.498 "method": "bdev_enable_histogram", 00:21:22.498 "params": { 00:21:22.498 "name": "nvme0n1", 00:21:22.498 "enable": true 00:21:22.498 } 00:21:22.498 }, 00:21:22.498 { 00:21:22.498 "method": "bdev_wait_for_examine" 00:21:22.498 } 00:21:22.498 ] 00:21:22.498 }, 00:21:22.498 { 00:21:22.498 "subsystem": "nbd", 00:21:22.498 "config": [] 00:21:22.498 } 00:21:22.498 ] 00:21:22.498 }' 00:21:22.498 [2024-11-20 15:31:11.265073] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
00:21:22.498 [2024-11-20 15:31:11.265127] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid628259 ] 00:21:22.498 [2024-11-20 15:31:11.347203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.498 [2024-11-20 15:31:11.377170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:22.759 [2024-11-20 15:31:11.512176] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:23.329 15:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:23.329 15:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:23.329 15:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:23.329 15:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:21:23.329 15:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.329 15:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:23.588 Running I/O for 1 seconds... 00:21:24.530 5312.00 IOPS, 20.75 MiB/s 00:21:24.530 Latency(us) 00:21:24.530 [2024-11-20T14:31:13.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.530 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:24.530 Verification LBA range: start 0x0 length 0x2000 00:21:24.530 nvme0n1 : 1.05 5189.90 20.27 0.00 0.00 24116.62 4642.13 46093.65 00:21:24.530 [2024-11-20T14:31:13.490Z] =================================================================================================================== 00:21:24.530 [2024-11-20T14:31:13.490Z] Total : 5189.90 20.27 0.00 0.00 24116.62 4642.13 46093.65 00:21:24.530 { 00:21:24.530 "results": [ 00:21:24.530 { 00:21:24.530 "job": "nvme0n1", 00:21:24.530 "core_mask": "0x2", 00:21:24.530 "workload": "verify", 00:21:24.530 "status": "finished", 00:21:24.530 "verify_range": { 00:21:24.530 "start": 0, 00:21:24.530 "length": 8192 00:21:24.530 }, 00:21:24.530 "queue_depth": 128, 00:21:24.530 "io_size": 4096, 00:21:24.530 "runtime": 1.04819, 00:21:24.530 "iops": 5189.898777893321, 00:21:24.530 "mibps": 20.273042101145784, 00:21:24.530 "io_failed": 0, 00:21:24.530 "io_timeout": 0, 00:21:24.530 "avg_latency_us": 24116.618039215682, 00:21:24.530 "min_latency_us": 4642.133333333333, 00:21:24.530 "max_latency_us": 46093.653333333335 00:21:24.530 } 00:21:24.530 ], 00:21:24.530 "core_count": 1 00:21:24.530 } 00:21:24.530 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:21:24.530 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:21:24.530 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:24.530 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:21:24.530 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:21:24.530 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = 
--pid ']' 00:21:24.530 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:24.530 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:24.530 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:24.530 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:24.530 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:24.530 nvmf_trace.0 00:21:24.530 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:21:24.530 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 628259 00:21:24.530 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 628259 ']' 00:21:24.530 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 628259 00:21:24.530 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:24.530 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:24.530 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 628259 00:21:24.790 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:24.790 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:24.790 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 628259' 00:21:24.790 killing process with pid 628259 00:21:24.790 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 628259 00:21:24.790 Received shutdown signal, test time was about 1.000000 seconds 00:21:24.790 00:21:24.790 Latency(us) 00:21:24.790 [2024-11-20T14:31:13.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.790 [2024-11-20T14:31:13.750Z] =================================================================================================================== 00:21:24.790 [2024-11-20T14:31:13.750Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:24.790 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 628259 00:21:24.790 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:24.790 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:24.790 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:21:24.790 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:24.790 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:21:24.790 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:24.790 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:24.790 rmmod nvme_tcp 00:21:24.790 rmmod nvme_fabrics 00:21:24.790 rmmod nvme_keyring 00:21:24.790 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:24.790 15:31:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:21:24.790 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:21:24.790 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 628208 ']' 00:21:24.790 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 628208 00:21:24.790 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 628208 ']' 00:21:24.790 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 628208 00:21:24.790 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:24.790 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:24.790 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 628208 00:21:25.050 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:25.050 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:25.050 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 628208' 00:21:25.050 killing process with pid 628208 00:21:25.050 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 628208 00:21:25.050 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 628208 00:21:25.050 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:25.050 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:25.050 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:25.050 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:21:25.050 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:21:25.050 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:25.050 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:21:25.050 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:25.050 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:25.050 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.050 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:25.050 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.594 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:27.594 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.3kWrHgizK5 /tmp/tmp.nYohHqZKDM /tmp/tmp.69cw9NEbjJ 00:21:27.594 00:21:27.594 real 1m25.674s 00:21:27.594 user 2m14.120s 00:21:27.594 sys 0m26.968s 00:21:27.594 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:27.594 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:27.594 ************************************ 00:21:27.594 END TEST nvmf_tls 00:21:27.594 
************************************ 00:21:27.594 15:31:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:27.594 15:31:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:27.594 15:31:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:27.594 15:31:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:27.594 ************************************ 00:21:27.594 START TEST nvmf_fips 00:21:27.594 ************************************ 00:21:27.594 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:27.594 * Looking for test storage... 00:21:27.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:27.594 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:27.594 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:21:27.594 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:27.594 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:27.594 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:27.594 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:27.594 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:27.594 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:27.594 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:27.594 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:27.594 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:27.594 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:21:27.594 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:21:27.594 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:21:27.594 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:27.594 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:27.594 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:27.594 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:27.594 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:27.594 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:27.594 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:27.594 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:27.594 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:27.594 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:27.594 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:21:27.594 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:21:27.594 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:27.594 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:21:27.594 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:21:27.594 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:27.594 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:27.594 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:21:27.594 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:27.594 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:27.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.594 --rc genhtml_branch_coverage=1 00:21:27.594 --rc genhtml_function_coverage=1 00:21:27.594 --rc genhtml_legend=1 00:21:27.594 --rc geninfo_all_blocks=1 00:21:27.594 --rc geninfo_unexecuted_blocks=1 00:21:27.594 00:21:27.594 ' 00:21:27.594 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:27.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.594 --rc genhtml_branch_coverage=1 00:21:27.594 --rc genhtml_function_coverage=1 00:21:27.594 --rc genhtml_legend=1 00:21:27.594 --rc geninfo_all_blocks=1 00:21:27.594 --rc geninfo_unexecuted_blocks=1 00:21:27.594 00:21:27.594 ' 00:21:27.594 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:27.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.594 --rc genhtml_branch_coverage=1 00:21:27.594 --rc genhtml_function_coverage=1 00:21:27.594 --rc genhtml_legend=1 00:21:27.594 --rc geninfo_all_blocks=1 00:21:27.594 --rc geninfo_unexecuted_blocks=1 00:21:27.594 00:21:27.594 ' 00:21:27.594 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:27.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.594 --rc genhtml_branch_coverage=1 00:21:27.595 --rc genhtml_function_coverage=1 00:21:27.595 --rc genhtml_legend=1 00:21:27.595 --rc geninfo_all_blocks=1 00:21:27.595 --rc geninfo_unexecuted_blocks=1 00:21:27.595 00:21:27.595 ' 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:27.595 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:21:27.595 15:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:21:27.595 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:21:27.596 Error setting digest 00:21:27.596 40A2C87E597F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:21:27.596 40A2C87E597F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:27.596 
15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:21:27.596 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:35.733 15:31:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:35.733 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:35.733 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:35.733 15:31:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:35.733 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:35.733 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:35.733 15:31:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:35.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:35.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:21:35.733 00:21:35.733 --- 10.0.0.2 ping statistics --- 00:21:35.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.733 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:35.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:35.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:21:35.733 00:21:35.733 --- 10.0.0.1 ping statistics --- 00:21:35.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.733 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=633053 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 633053 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 633053 ']' 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:35.733 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:35.733 [2024-11-20 15:31:23.818446] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
00:21:35.733 [2024-11-20 15:31:23.818502] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.733 [2024-11-20 15:31:23.887899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.733 [2024-11-20 15:31:23.916212] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:35.733 [2024-11-20 15:31:23.916241] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.733 [2024-11-20 15:31:23.916246] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.733 [2024-11-20 15:31:23.916252] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.733 [2024-11-20 15:31:23.916256] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:35.733 [2024-11-20 15:31:23.916697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.733 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:35.733 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:35.733 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:35.733 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:35.733 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:35.733 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:35.733 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:21:35.733 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:35.733 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:21:35.733 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.xZN 00:21:35.733 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:35.733 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.xZN 00:21:35.733 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.xZN 00:21:35.733 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.xZN 00:21:35.733 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:35.994 [2024-11-20 15:31:24.807886] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.994 [2024-11-20 15:31:24.823896] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:35.994 [2024-11-20 15:31:24.824084] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:35.994 malloc0 00:21:35.994 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:35.994 15:31:24 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=633297 00:21:35.994 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 633297 /var/tmp/bdevperf.sock 00:21:35.994 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:35.994 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 633297 ']' 00:21:35.994 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:35.994 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:35.994 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:35.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:35.994 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:35.994 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:36.254 [2024-11-20 15:31:24.957931] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:21:36.254 [2024-11-20 15:31:24.957987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid633297 ] 00:21:36.254 [2024-11-20 15:31:25.047168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.254 [2024-11-20 15:31:25.082165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:36.824 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:36.824 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:36.824 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.xZN 00:21:37.085 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:37.345 [2024-11-20 15:31:26.077721] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:37.345 TLSTESTn1 00:21:37.345 15:31:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:37.345 Running I/O for 10 seconds... 
00:21:39.666 4855.00 IOPS, 18.96 MiB/s [2024-11-20T14:31:29.567Z] 5186.00 IOPS, 20.26 MiB/s [2024-11-20T14:31:30.508Z] 5539.00 IOPS, 21.64 MiB/s [2024-11-20T14:31:31.449Z] 5772.75 IOPS, 22.55 MiB/s [2024-11-20T14:31:32.389Z] 5826.00 IOPS, 22.76 MiB/s [2024-11-20T14:31:33.388Z] 5870.67 IOPS, 22.93 MiB/s [2024-11-20T14:31:34.331Z] 5933.29 IOPS, 23.18 MiB/s [2024-11-20T14:31:35.376Z] 5934.38 IOPS, 23.18 MiB/s [2024-11-20T14:31:36.317Z] 5914.89 IOPS, 23.11 MiB/s [2024-11-20T14:31:36.317Z] 5968.60 IOPS, 23.31 MiB/s 00:21:47.357 Latency(us) 00:21:47.357 [2024-11-20T14:31:36.317Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:47.357 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:47.357 Verification LBA range: start 0x0 length 0x2000 00:21:47.357 TLSTESTn1 : 10.02 5971.47 23.33 0.00 0.00 21400.00 6034.77 28398.93 00:21:47.357 [2024-11-20T14:31:36.317Z] =================================================================================================================== 00:21:47.357 [2024-11-20T14:31:36.317Z] Total : 5971.47 23.33 0.00 0.00 21400.00 6034.77 28398.93 00:21:47.357 { 00:21:47.357 "results": [ 00:21:47.357 { 00:21:47.357 "job": "TLSTESTn1", 00:21:47.357 "core_mask": "0x4", 00:21:47.357 "workload": "verify", 00:21:47.357 "status": "finished", 00:21:47.357 "verify_range": { 00:21:47.357 "start": 0, 00:21:47.357 "length": 8192 00:21:47.357 }, 00:21:47.357 "queue_depth": 128, 00:21:47.357 "io_size": 4096, 00:21:47.357 "runtime": 10.016455, 00:21:47.357 "iops": 5971.473939632335, 00:21:47.357 "mibps": 23.326070076688808, 00:21:47.357 "io_failed": 0, 00:21:47.357 "io_timeout": 0, 00:21:47.357 "avg_latency_us": 21400.004984423675, 00:21:47.357 "min_latency_us": 6034.7733333333335, 00:21:47.357 "max_latency_us": 28398.933333333334 00:21:47.357 } 00:21:47.357 ], 00:21:47.357 "core_count": 1 00:21:47.357 } 00:21:47.617 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:47.617 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:47.617 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:21:47.617 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:21:47.617 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:21:47.617 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:47.617 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:47.617 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:47.617 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:47.617 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:47.617 nvmf_trace.0 00:21:47.617 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:21:47.617 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 633297 00:21:47.617 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 633297 ']' 00:21:47.617 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 633297 00:21:47.617 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:47.617 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:47.617 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 633297 00:21:47.617 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:47.617 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:47.617 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 633297' 00:21:47.617 killing process with pid 633297 00:21:47.617 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 633297 00:21:47.617 Received shutdown signal, test time was about 10.000000 seconds 00:21:47.617 00:21:47.617 Latency(us) 00:21:47.617 [2024-11-20T14:31:36.577Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:47.617 [2024-11-20T14:31:36.577Z] =================================================================================================================== 00:21:47.617 [2024-11-20T14:31:36.577Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:47.617 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 633297 00:21:47.877 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:47.877 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:47.877 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:21:47.877 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:47.877 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:21:47.877 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:47.877 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:47.877 rmmod nvme_tcp 00:21:47.877 rmmod nvme_fabrics 00:21:47.877 rmmod nvme_keyring 00:21:47.877 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:47.877 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:21:47.877 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:21:47.877 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 633053 ']' 00:21:47.877 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 633053 00:21:47.877 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 633053 ']' 00:21:47.877 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 633053 00:21:47.877 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:47.877 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:47.877 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 633053 00:21:47.877 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:47.877 15:31:36 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:47.877 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 633053' 00:21:47.877 killing process with pid 633053 00:21:47.877 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 633053 00:21:47.877 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 633053 00:21:47.877 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:47.877 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:47.877 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:47.877 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:21:47.877 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:21:47.877 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:47.877 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:21:47.877 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:47.877 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:47.877 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.877 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:47.877 15:31:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.419 15:31:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:50.419 15:31:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.xZN 00:21:50.419 00:21:50.419 real 0m22.876s 00:21:50.419 user 0m24.920s 00:21:50.419 sys 0m9.182s 00:21:50.419 15:31:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:50.419 15:31:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:50.419 ************************************ 00:21:50.419 END TEST nvmf_fips 00:21:50.419 ************************************ 00:21:50.419 15:31:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:50.419 15:31:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:50.419 15:31:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:50.419 15:31:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:50.419 ************************************ 00:21:50.419 START TEST nvmf_control_msg_list 00:21:50.419 ************************************ 00:21:50.419 15:31:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:50.419 * Looking for test storage... 
00:21:50.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:50.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.419 --rc genhtml_branch_coverage=1 00:21:50.419 --rc genhtml_function_coverage=1 00:21:50.419 --rc genhtml_legend=1 00:21:50.419 --rc geninfo_all_blocks=1 00:21:50.419 --rc geninfo_unexecuted_blocks=1 00:21:50.419 00:21:50.419 ' 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:50.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.419 --rc genhtml_branch_coverage=1 00:21:50.419 --rc genhtml_function_coverage=1 00:21:50.419 --rc genhtml_legend=1 00:21:50.419 --rc geninfo_all_blocks=1 00:21:50.419 --rc geninfo_unexecuted_blocks=1 00:21:50.419 00:21:50.419 ' 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:50.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.419 --rc genhtml_branch_coverage=1 00:21:50.419 --rc genhtml_function_coverage=1 00:21:50.419 --rc genhtml_legend=1 00:21:50.419 --rc geninfo_all_blocks=1 00:21:50.419 --rc geninfo_unexecuted_blocks=1 00:21:50.419 00:21:50.419 ' 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:50.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.419 --rc genhtml_branch_coverage=1 00:21:50.419 --rc genhtml_function_coverage=1 00:21:50.419 --rc genhtml_legend=1 00:21:50.419 --rc geninfo_all_blocks=1 00:21:50.419 --rc geninfo_unexecuted_blocks=1 00:21:50.419 00:21:50.419 ' 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:50.419 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:50.420 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:50.420 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:50.420 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:50.420 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:50.420 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:50.420 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:21:50.420 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:50.420 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:50.420 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:50.420 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.420 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.420 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.420 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:21:50.420 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.420 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:21:50.420 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:50.420 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:50.420 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:50.420 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:50.420 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:50.420 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:50.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:50.420 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:50.420 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:50.420 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:50.420 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:21:50.420 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:50.420 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:50.420 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:50.420 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:50.420 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:50.420 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.420 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:50.420 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.420 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:50.420 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:50.420 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:21:50.420 15:31:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:21:58.559 15:31:46 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:58.559 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.559 15:31:46 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:58.559 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.559 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:58.560 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:58.560 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:58.560 15:31:46 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:58.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:58.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:21:58.560 00:21:58.560 --- 10.0.0.2 ping statistics --- 00:21:58.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.560 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:58.560 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:58.560 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:21:58.560 00:21:58.560 --- 10.0.0.1 ping statistics --- 00:21:58.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.560 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=639746 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 639746 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 639746 ']' 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:58.560 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:58.560 [2024-11-20 15:31:46.785984] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:21:58.560 [2024-11-20 15:31:46.786051] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:58.560 [2024-11-20 15:31:46.887322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.560 [2024-11-20 15:31:46.938999] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:58.560 [2024-11-20 15:31:46.939052] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:58.560 [2024-11-20 15:31:46.939060] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:58.560 [2024-11-20 15:31:46.939067] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:58.560 [2024-11-20 15:31:46.939074] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
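The app_setup_trace notices above spell out how to inspect the target's tracepoints: nvmf_tgt was launched with '-i 0 -e 0xFFFF', so every trace group is enabled and the shared-memory trace file is /dev/shm/nvmf_trace.0. A minimal sketch of taking a snapshot, assuming the spdk_trace tool sits under build/bin in this workspace (the exact binary path is an assumption based on this job's layout, not something the trace confirms):

    # Snapshot the live tracepoints of app instance 0 (matches the -i 0 above)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0
    # Or, as the notice suggests, keep the raw shm file for offline analysis
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0.snapshot

The nvmf_fips teardown earlier took the second route: its process_shm helper tar'ed nvmf_trace.0 out of /dev/shm into the job's output directory.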
00:21:58.560 [2024-11-20 15:31:46.939837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.822 15:31:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:58.822 15:31:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:21:58.822 15:31:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:58.822 15:31:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:58.822 15:31:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:58.822 15:31:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:58.822 15:31:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:58.822 15:31:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:58.822 15:31:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:21:58.822 15:31:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.822 15:31:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:58.822 [2024-11-20 15:31:47.653493] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:58.822 15:31:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.822 15:31:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:21:58.822 15:31:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.822 15:31:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:58.822 15:31:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.822 15:31:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:58.822 15:31:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.822 15:31:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:58.822 Malloc0 00:21:58.822 15:31:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.822 15:31:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:58.822 15:31:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.822 15:31:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:58.822 15:31:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.822 15:31:47 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:58.822 15:31:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.822 15:31:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:58.822 [2024-11-20 15:31:47.703992] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:58.822 15:31:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.822 15:31:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=640006 00:21:58.822 15:31:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:58.822 15:31:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=640007 00:21:58.822 15:31:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:58.822 15:31:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=640008 00:21:58.822 15:31:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 640006 00:21:58.822 15:31:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:59.083 [2024-11-20 15:31:47.794561] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:59.083 [2024-11-20 15:31:47.804629] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:59.083 [2024-11-20 15:31:47.804936] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:00.026 Initializing NVMe Controllers 00:22:00.026 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:00.026 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:22:00.026 Initialization complete. Launching workers. 
00:22:00.026 ======================================================== 00:22:00.026 Latency(us) 00:22:00.026 Device Information : IOPS MiB/s Average min max 00:22:00.026 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 2476.00 9.67 403.65 147.18 762.66 00:22:00.026 ======================================================== 00:22:00.026 Total : 2476.00 9.67 403.65 147.18 762.66 00:22:00.026 00:22:00.026 Initializing NVMe Controllers 00:22:00.026 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:00.026 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:22:00.026 Initialization complete. Launching workers. 00:22:00.026 ======================================================== 00:22:00.026 Latency(us) 00:22:00.026 Device Information : IOPS MiB/s Average min max 00:22:00.026 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40910.37 40842.93 41069.04 00:22:00.026 ======================================================== 00:22:00.026 Total : 25.00 0.10 40910.37 40842.93 41069.04 00:22:00.026 00:22:00.026 Initializing NVMe Controllers 00:22:00.026 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:00.026 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:22:00.026 Initialization complete. Launching workers. 00:22:00.026 ======================================================== 00:22:00.026 Latency(us) 00:22:00.026 Device Information : IOPS MiB/s Average min max 00:22:00.026 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40922.85 40832.68 41362.02 00:22:00.026 ======================================================== 00:22:00.026 Total : 25.00 0.10 40922.85 40832.68 41362.02 00:22:00.026 00:22:00.026 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 640007 00:22:00.026 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 640008 00:22:00.026 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:00.026 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:22:00.026 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:00.026 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:22:00.026 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:00.026 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:22:00.026 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:00.026 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:00.026 rmmod nvme_tcp 00:22:00.287 rmmod nvme_fabrics 00:22:00.287 rmmod nvme_keyring 00:22:00.287 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:00.287 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:22:00.287 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:22:00.287 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- 
# '[' -n 639746 ']' 00:22:00.287 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 639746 00:22:00.287 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 639746 ']' 00:22:00.287 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 639746 00:22:00.287 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:22:00.287 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:00.287 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 639746 00:22:00.287 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:00.287 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:00.287 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 639746' 00:22:00.287 killing process with pid 639746 00:22:00.287 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 639746 00:22:00.287 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 639746 00:22:00.548 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:00.548 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:00.548 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:00.548 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:22:00.548 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:22:00.548 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:00.548 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:22:00.548 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:00.548 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:00.548 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.548 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:00.548 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.460 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:02.460 00:22:02.460 real 0m12.374s 00:22:02.460 user 0m8.023s 00:22:02.460 sys 0m6.511s 00:22:02.460 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:02.460 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:02.460 ************************************ 00:22:02.460 END TEST nvmf_control_msg_list 00:22:02.460 ************************************ 
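A note on what this test just demonstrated. The transport was created with '-t tcp -o --in-capsule-data-size 768 --control-msg-num 1', i.e. a single shared control-message buffer, and three spdk_nvme_perf initiators (core masks 0x2, 0x4 and 0x8, each at queue depth 1) then raced for it. The tables above show the effect: one job ran normally at ~2476 IOPS, while the other two completed only 25 IOPS at ~41 ms average latency, stalled waiting for the lone control message to recycle, which is evidently the starvation case the test exists to cover. A sketch of the target-side setup that the harness's rpc_cmd wrapper issued, assuming scripts/rpc.py from this tree against the already-running nvmf_tgt (the flags below are copied verbatim from the trace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # TCP transport with exactly one control message, the knob under test
    $rpc nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    # Subsystem open to any host (-a), backed by a 32 MiB malloc bdev with 512 B blocks
    $rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
    $rpc bdev_malloc_create -b Malloc0 32 512
    $rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420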
00:22:02.460 15:31:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:02.460 15:31:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:02.460 15:31:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:02.460 15:31:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:02.720 ************************************ 00:22:02.720 START TEST nvmf_wait_for_buf 00:22:02.720 ************************************ 00:22:02.720 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:02.720 * Looking for test storage... 00:22:02.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:02.720 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:02.720 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:22:02.720 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:02.720 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:02.720 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:02.720 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:02.720 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:02.720 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:22:02.720 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:22:02.720 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:22:02.720 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:22:02.720 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:22:02.720 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:22:02.720 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:22:02.720 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:02.720 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:22:02.720 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:22:02.720 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:02.720 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:02.720 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:22:02.720 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:22:02.720 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:02.721 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:22:02.721 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:02.721 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:22:02.721 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:22:02.721 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:02.721 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:22:02.721 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:02.721 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:02.721 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:02.721 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:22:02.721 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:02.721 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:02.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.721 --rc genhtml_branch_coverage=1 00:22:02.721 --rc genhtml_function_coverage=1 00:22:02.721 --rc genhtml_legend=1 00:22:02.721 --rc geninfo_all_blocks=1 00:22:02.721 --rc geninfo_unexecuted_blocks=1 00:22:02.721 00:22:02.721 ' 00:22:02.721 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:02.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.721 --rc genhtml_branch_coverage=1 00:22:02.721 --rc genhtml_function_coverage=1 00:22:02.721 --rc genhtml_legend=1 00:22:02.721 --rc geninfo_all_blocks=1 00:22:02.721 --rc geninfo_unexecuted_blocks=1 00:22:02.721 00:22:02.721 ' 00:22:02.721 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:02.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.721 --rc genhtml_branch_coverage=1 00:22:02.721 --rc genhtml_function_coverage=1 00:22:02.721 --rc genhtml_legend=1 00:22:02.721 --rc geninfo_all_blocks=1 00:22:02.721 --rc geninfo_unexecuted_blocks=1 00:22:02.721 00:22:02.721 ' 00:22:02.721 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:02.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.721 --rc genhtml_branch_coverage=1 00:22:02.721 --rc genhtml_function_coverage=1 00:22:02.721 --rc genhtml_legend=1 00:22:02.721 --rc geninfo_all_blocks=1 00:22:02.721 --rc geninfo_unexecuted_blocks=1 00:22:02.721 00:22:02.721 ' 00:22:02.721 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:02.721 15:31:51 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:22:02.721 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:02.721 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:02.721 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:02.721 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:02.721 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:02.721 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:02.721 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:02.721 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:02.721 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:02.721 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:02.721 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:02.721 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:02.721 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:02.721 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:02.721 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:02.721 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:02.721 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:02.721 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:02.981 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:02.981 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:02.981 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:02.981 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.981 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.981 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.981 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:22:02.981 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.981 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:22:02.981 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:02.981 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:02.981 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:02.981 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:02.981 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:02.981 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:02.981 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:02.981 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:02.981 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:02.982 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:02.982 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:22:02.982 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:22:02.982 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:02.982 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:02.982 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:02.982 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:02.982 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.982 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:02.982 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.982 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:02.982 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:02.982 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:02.982 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:11.131 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:11.131 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:11.131 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:11.131 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:11.131 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:11.131 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:11.131 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:11.131 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:11.132 
15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:11.132 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:11.132 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:11.132 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:11.132 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:11.132 15:31:58 
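For reference, the gather_supported_nvmf_pci_devs walk just traced keeps per-family device-ID lists (E810: 0x1592/0x159b, X722: 0x37d2, plus the Mellanox mlx5 IDs), narrows to the E810 entries for this tcp/e810 job, and maps each matching PCI function to its kernel netdev through sysfs. A minimal sketch of that logic, assuming pciutils' lspci is available; the real script reads a prebuilt pci_bus_cache instead of re-querying, and renders its "up" check differently than the operstate probe used here:

    # Sketch: find Intel E810 (8086:159b) ports and their net devices.
    net_devs=()
    for pci in $(lspci -Dnmm -d 8086:159b | awk '{print $1}'); do
        echo "Found $pci (0x8086 - 0x159b)"
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            dev=${path##*/}                        # e.g. cvl_0_0, cvl_0_1
            state=$(cat "/sys/class/net/$dev/operstate" 2>/dev/null)
            [[ $state == up ]] || continue         # keep only ports with link
            echo "Found net devices under $pci: $dev"
            net_devs+=("$dev")
        done
    done

Both discovered ports end up in net_devs, which is why the trace that follows can split them into a target interface (cvl_0_0) and an initiator interface (cvl_0_1).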
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:11.132 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:11.132 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:11.132 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:11.132 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:11.132 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:11.132 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:11.132 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:11.132 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:11.132 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:11.132 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:11.132 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:22:11.132 00:22:11.132 --- 10.0.0.2 ping statistics --- 00:22:11.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.132 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:22:11.132 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:11.132 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:11.132 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:22:11.132 00:22:11.132 --- 10.0.0.1 ping statistics --- 00:22:11.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.132 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:22:11.132 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:11.132 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:22:11.132 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:11.132 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:11.132 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:11.132 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:11.133 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:11.133 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:11.133 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:11.133 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:22:11.133 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:11.133 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:11.133 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:11.133 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=644381 00:22:11.133 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 644381 00:22:11.133 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:11.133 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 644381 ']' 00:22:11.133 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.133 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:11.133 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:11.133 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:11.133 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:11.133 [2024-11-20 15:31:59.269205] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
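Condensed, the network plumbing nvmftestinit just performed is the sequence below; every command appears verbatim in the trace above (paths shortened, and the iptables rule shown without the SPDK_NVMF bookkeeping comment the ipts wrapper adds). One E810 port moves into a private network namespace to play the target, its twin stays in the root namespace as the initiator, reachability is proven in both directions, and only then is nvmf_tgt started inside the namespace, idle until RPC configuration arrives:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator-side port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns
    # launched in the background; waitforlisten then polls /var/tmp/spdk.sock
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &

Running the target in a namespace over a real link (rather than loopback) is what makes this a "phy" job: the NVMe/TCP traffic actually crosses the E810 hardware.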
00:22:11.133 [2024-11-20 15:31:59.269272] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:11.133 [2024-11-20 15:31:59.370708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.133 [2024-11-20 15:31:59.422083] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:11.133 [2024-11-20 15:31:59.422136] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:11.133 [2024-11-20 15:31:59.422145] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:11.133 [2024-11-20 15:31:59.422152] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:11.133 [2024-11-20 15:31:59.422167] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:11.133 [2024-11-20 15:31:59.422940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.393 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:11.393 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:22:11.393 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:11.393 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:11.393 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:11.393 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:11.393 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:11.393 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:11.393 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:22:11.393 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.393 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:11.394 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.394 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:22:11.394 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.394 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:11.394 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.394 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:22:11.394 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.394 15:32:00 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:11.394 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.394 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:11.394 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.394 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:11.394 Malloc0 00:22:11.394 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.394 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:22:11.394 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.394 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:11.394 [2024-11-20 15:32:00.263518] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:11.394 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.394 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:22:11.394 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.394 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:11.394 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.394 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:11.394 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.394 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:11.394 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.394 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:11.394 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.394 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:11.394 [2024-11-20 15:32:00.299883] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:11.394 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.394 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:11.653 [2024-11-20 15:32:00.403903] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:22:13.035 Initializing NVMe Controllers
00:22:13.035 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:22:13.035 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:22:13.035 Initialization complete. Launching workers.
00:22:13.035 ========================================================
00:22:13.035                                                                              Latency(us)
00:22:13.035 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:22:13.035 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0:      29.00       3.62  146681.12   47874.68  191554.22
00:22:13.035 ========================================================
00:22:13.035 Total                                                                    :      29.00       3.62  146681.12   47874.68  191554.22
00:22:13.035
00:22:13.035 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats
00:22:13.035 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'
00:22:13.035 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:13.035 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:22:13.035 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:13.035 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=438
00:22:13.035 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 438 -eq 0 ]]
00:22:13.035 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:22:13.035 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini
00:22:13.035 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:13.035 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync
00:22:13.035 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:13.035 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e
00:22:13.035 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:13.035 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:13.035 rmmod nvme_tcp
00:22:13.035 rmmod nvme_fabrics
00:22:13.035 rmmod nvme_keyring
00:22:13.035 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:13.035 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e
00:22:13.035 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0
00:22:13.035 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 644381 ']'
00:22:13.035 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 644381
00:22:13.035 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 644381 ']'
00:22:13.035 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 644381
00:22:13.035 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf --
common/autotest_common.sh@959 -- # uname 00:22:13.035 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:13.035 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 644381 00:22:13.296 15:32:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:13.296 15:32:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:13.296 15:32:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 644381' 00:22:13.296 killing process with pid 644381 00:22:13.296 15:32:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 644381 00:22:13.296 15:32:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 644381 00:22:13.296 15:32:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:13.296 15:32:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:13.296 15:32:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:13.296 15:32:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:22:13.296 15:32:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:22:13.296 15:32:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:13.296 15:32:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:22:13.296 15:32:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:13.296 15:32:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:13.296 15:32:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.296 15:32:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:13.296 15:32:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.837 15:32:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:15.837 00:22:15.837 real 0m12.809s 00:22:15.837 user 0m5.259s 00:22:15.837 sys 0m6.159s 00:22:15.837 15:32:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:15.837 15:32:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:15.837 ************************************ 00:22:15.837 END TEST nvmf_wait_for_buf 00:22:15.837 ************************************ 00:22:15.837 15:32:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:22:15.837 15:32:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:22:15.837 15:32:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:22:15.837 15:32:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:22:15.837 15:32:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:22:15.837 15:32:04 
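Stripped of the harness tracing, the nvmf_wait_for_buf test that just ended ("END TEST" above) is a small buffer-starvation scenario: the shared iobuf small pool is capped at 154 buffers while the TCP transport is created with only a handful of shared data buffers (the -n 24 -b 24 options in the trace), the intent being that the 4-deep 128 KiB random-read workload outruns the pool, and the pass criterion is simply that the pool's retry counter moved (438 in this run). A rough replay of the sequence from the trace, with rpc.py standing in for the harness's rpc_cmd wrapper and paths shortened:

    # Starve the small iobuf pool, then drive I/O through the TCP transport.
    rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
    rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    rpc.py framework_start_init
    rpc.py bdev_malloc_create -b Malloc0 32 512
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
    rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    ./build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    # Pass criterion: the starved pool must have recorded allocation retries.
    retry_count=$(rpc.py iobuf_get_stats |
        jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    [[ $retry_count -eq 0 ]] && exit 1    # 438 retries observed in this run

This also explains the low throughput in the results table above (29 IOPS at 128 KiB): the workload is deliberately gated on buffer availability, not on the link.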
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:23.979 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:23.979 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:22:23.979 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:23.979 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:23.979 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:23.979 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:23.979 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:23.979 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:22:23.979 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:23.979 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:22:23.979 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:22:23.979 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:22:23.979 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:22:23.979 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:22:23.979 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:22:23.979 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:23.979 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:23.979 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:23.979 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:23.979 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:23.979 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:23.979 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:23.979 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:23.979 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:23.979 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:23.979 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:23.979 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:23.979 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:23.979 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:23.979 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:23.979 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:23.979 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:23.979 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:23.979 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:22:23.979 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:23.979 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:23.979 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:23.979 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:23.980 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:23.980 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:23.980 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:23.980 ************************************ 00:22:23.980 START TEST nvmf_perf_adq 00:22:23.980 ************************************ 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:23.980 * Looking for test storage... 00:22:23.980 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:23.980 15:32:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:23.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.980 --rc genhtml_branch_coverage=1 00:22:23.980 --rc genhtml_function_coverage=1 00:22:23.980 --rc genhtml_legend=1 00:22:23.980 --rc geninfo_all_blocks=1 00:22:23.980 --rc geninfo_unexecuted_blocks=1 00:22:23.980 00:22:23.980 ' 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:23.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.980 --rc genhtml_branch_coverage=1 00:22:23.980 --rc genhtml_function_coverage=1 00:22:23.980 --rc genhtml_legend=1 00:22:23.980 --rc geninfo_all_blocks=1 00:22:23.980 --rc geninfo_unexecuted_blocks=1 00:22:23.980 00:22:23.980 ' 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:23.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.980 --rc genhtml_branch_coverage=1 00:22:23.980 --rc genhtml_function_coverage=1 00:22:23.980 --rc genhtml_legend=1 00:22:23.980 --rc geninfo_all_blocks=1 00:22:23.980 --rc geninfo_unexecuted_blocks=1 00:22:23.980 00:22:23.980 ' 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:23.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.980 --rc genhtml_branch_coverage=1 00:22:23.980 --rc genhtml_function_coverage=1 00:22:23.980 --rc genhtml_legend=1 00:22:23.980 --rc geninfo_all_blocks=1 00:22:23.980 --rc geninfo_unexecuted_blocks=1 00:22:23.980 00:22:23.980 ' 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
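The scripts/common.sh tracing above (decimal 1, decimal 2, ver1[v]=1, ver2[v]=2, and so on) is the harness's version gate deciding that the installed lcov, 1.15 per the "lt 1.15 2" call, predates 2.0 before exporting the LCOV_OPTS shown. The comparison boils down to the following sketch; it is simplified in that the real helper validates every component through its decimal() check rather than defaulting missing fields to 0:

    # Sketch of cmp_versions: compare dotted versions numerically, per field.
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v d1 d2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            ((d1 > d2)) && { [[ $op == *'>'* ]]; return; }   # decided high
            ((d1 < d2)) && { [[ $op == *'<'* ]]; return; }   # decided low
        done
        [[ $op == *'='* ]]    # all fields equal: true for ==, >=, <=
    }
    cmp_versions 1.15 '<' 2 && echo "lcov predates 2.0"      # the case traced here

Comparing field by field is what makes 1.15 sort below 2 even though a plain string comparison would put "1.15" after "1.2".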
00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:23.980 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:23.981 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:23.981 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:23.981 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:22:23.981 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:23.981 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:23.981 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:23.981 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.981 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.981 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.981 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:23.981 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.981 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:22:23.981 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:23.981 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:23.981 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:23.981 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:23.981 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:23.981 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:23.981 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:23.981 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:23.981 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:23.981 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:23.981 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:23.981 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:23.981 15:32:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:30.567 15:32:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:30.567 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:30.567 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:30.567 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:30.568 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:30.568 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:30.568 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.568 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:30.568 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.568 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:30.568 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:30.568 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.568 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:30.568 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:30.568 15:32:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.568 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:30.568 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.568 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:30.568 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.568 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:30.568 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:30.568 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.568 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:30.568 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:30.568 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.568 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:30.568 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:30.568 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:30.568 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:30.568 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:22:30.568 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:30.568 15:32:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:31.953 15:32:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:33.867 15:32:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:39.154 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:39.154 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:39.154 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:39.154 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:39.155 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:39.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:39.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.523 ms 00:22:39.155 00:22:39.155 --- 10.0.0.2 ping statistics --- 00:22:39.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.155 rtt min/avg/max/mdev = 0.523/0.523/0.523/0.000 ms 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:39.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:39.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.359 ms 00:22:39.155 00:22:39.155 --- 10.0.0.1 ping statistics --- 00:22:39.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.155 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.155 15:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=654604 00:22:39.155 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 654604 00:22:39.155 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:39.155 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 654604 ']' 00:22:39.155 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.155 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:39.155 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.155 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:39.155 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.155 [2024-11-20 15:32:28.065815] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
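For readers reconstructing the setup, the nvmftestinit/nvmf_tcp_init sequence traced above reduces to the following (a condensed sketch using the interface names and addresses from this run; the helpers' variable plumbing, the addr-flush steps, and the iptables comment tag are omitted):

  ip netns add cvl_0_0_ns_spdk                       # dedicated namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # first E810 port becomes the target interface
  ip addr add 10.0.0.1/24 dev cvl_0_1                # second port stays in the root ns as the initiator
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
  ping -c 1 10.0.0.2                                 # root ns -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator reachability

The two pings logged above are exactly that reachability check, and nvmf_tgt is then launched under ip netns exec cvl_0_0_ns_spdk; the point of the namespace split is that target and initiator traffic crosses the link between the two physical ports instead of short-circuiting through the kernel loopback path.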
00:22:39.155 [2024-11-20 15:32:28.065882] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:39.416 [2024-11-20 15:32:28.165476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:39.416 [2024-11-20 15:32:28.221361] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:39.416 [2024-11-20 15:32:28.221413] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:39.416 [2024-11-20 15:32:28.221422] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:39.416 [2024-11-20 15:32:28.221430] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:39.416 [2024-11-20 15:32:28.221438] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:39.416 [2024-11-20 15:32:28.223355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:39.416 [2024-11-20 15:32:28.223516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:39.416 [2024-11-20 15:32:28.223713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.416 [2024-11-20 15:32:28.223715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:39.988 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:39.988 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:39.988 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:39.988 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:39.988 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.988 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.988 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:22:39.988 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:39.988 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:39.988 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.988 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:40.250 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.250 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:40.250 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:40.250 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.250 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:40.250 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.250 
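The adq_configure_nvmf_target 0 call being traced through this stretch boils down to a short RPC sequence (a condensed sketch; rpc_cmd is assumed to front scripts/rpc.py against /var/tmp/spdk.sock, and the trailing 0 is the placement-id argument, which makes this first pass the non-ADQ baseline):

  scripts/rpc.py sock_get_default_impl               # reports "posix" in this run
  scripts/rpc.py sock_impl_set_options -i posix --enable-placement-id 0 --enable-zerocopy-send-server
  scripts/rpc.py framework_start_init                # target was started with --wait-for-rpc
  scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1          # 64 MB malloc bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

spdk_nvme_perf then connects with -c 0xF0 (cores 4-7, disjoint from the target's 0xF mask), and the nvmf_get_stats check further down counts the poll groups with current_io_qpairs == 1, asserting that the four connections landed one per poll group.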
15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:40.250 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.250 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:40.250 15:32:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.250 15:32:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:40.250 15:32:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.250 15:32:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:40.250 [2024-11-20 15:32:29.087453] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:40.250 15:32:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.250 15:32:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:40.250 15:32:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.250 15:32:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:40.250 Malloc1 00:22:40.250 15:32:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.250 15:32:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:40.250 15:32:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.250 15:32:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:40.250 15:32:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.250 15:32:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:40.250 15:32:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.250 15:32:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:40.250 15:32:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.250 15:32:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:40.250 15:32:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.250 15:32:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:40.250 [2024-11-20 15:32:29.163262] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.250 15:32:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.250 15:32:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=654938 00:22:40.250 15:32:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:22:40.250 15:32:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:42.795 15:32:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:22:42.795 15:32:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.795 15:32:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:42.795 15:32:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.795 15:32:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:22:42.795 "tick_rate": 2400000000, 00:22:42.795 "poll_groups": [ 00:22:42.795 { 00:22:42.795 "name": "nvmf_tgt_poll_group_000", 00:22:42.795 "admin_qpairs": 1, 00:22:42.795 "io_qpairs": 1, 00:22:42.795 "current_admin_qpairs": 1, 00:22:42.795 "current_io_qpairs": 1, 00:22:42.795 "pending_bdev_io": 0, 00:22:42.795 "completed_nvme_io": 13981, 00:22:42.795 "transports": [ 00:22:42.795 { 00:22:42.795 "trtype": "TCP" 00:22:42.795 } 00:22:42.795 ] 00:22:42.795 }, 00:22:42.795 { 00:22:42.795 "name": "nvmf_tgt_poll_group_001", 00:22:42.795 "admin_qpairs": 0, 00:22:42.795 "io_qpairs": 1, 00:22:42.795 "current_admin_qpairs": 0, 00:22:42.795 "current_io_qpairs": 1, 00:22:42.795 "pending_bdev_io": 0, 00:22:42.795 "completed_nvme_io": 14275, 00:22:42.795 "transports": [ 00:22:42.795 { 00:22:42.795 "trtype": "TCP" 00:22:42.795 } 00:22:42.795 ] 00:22:42.795 }, 00:22:42.795 { 00:22:42.795 "name": "nvmf_tgt_poll_group_002", 00:22:42.795 "admin_qpairs": 0, 00:22:42.795 "io_qpairs": 1, 00:22:42.795 "current_admin_qpairs": 0, 00:22:42.795 "current_io_qpairs": 1, 00:22:42.795 "pending_bdev_io": 0, 00:22:42.795 "completed_nvme_io": 15009, 00:22:42.795 "transports": [ 00:22:42.795 { 00:22:42.795 "trtype": "TCP" 00:22:42.795 } 00:22:42.795 ] 00:22:42.795 }, 00:22:42.795 { 00:22:42.795 "name": "nvmf_tgt_poll_group_003", 00:22:42.795 "admin_qpairs": 0, 00:22:42.795 "io_qpairs": 1, 00:22:42.795 "current_admin_qpairs": 0, 00:22:42.795 "current_io_qpairs": 1, 00:22:42.795 "pending_bdev_io": 0, 00:22:42.795 "completed_nvme_io": 14076, 00:22:42.795 "transports": [ 00:22:42.795 { 00:22:42.795 "trtype": "TCP" 00:22:42.795 } 00:22:42.795 ] 00:22:42.795 } 00:22:42.795 ] 00:22:42.795 }' 00:22:42.795 15:32:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:42.795 15:32:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:22:42.795 15:32:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:22:42.795 15:32:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:22:42.795 15:32:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 654938 00:22:50.924 Initializing NVMe Controllers 00:22:50.924 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:50.924 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:50.924 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:50.924 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:50.924 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 
7 00:22:50.924 Initialization complete. Launching workers. 00:22:50.924 ======================================================== 00:22:50.925 Latency(us) 00:22:50.925 Device Information : IOPS MiB/s Average min max 00:22:50.925 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12393.90 48.41 5164.58 1421.35 14085.10 00:22:50.925 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 12775.50 49.90 5009.43 1379.94 11893.04 00:22:50.925 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13314.00 52.01 4806.48 1335.79 13631.75 00:22:50.925 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12418.20 48.51 5153.59 1261.60 13721.21 00:22:50.925 ======================================================== 00:22:50.925 Total : 50901.59 198.83 5029.29 1261.60 14085.10 00:22:50.925 00:22:50.925 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:22:50.925 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:50.925 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:50.925 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:50.925 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:50.925 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:50.925 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:50.925 rmmod nvme_tcp 00:22:50.925 rmmod nvme_fabrics 00:22:50.925 rmmod nvme_keyring 00:22:50.925 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:50.925 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:50.925 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:50.925 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 654604 ']' 00:22:50.925 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 654604 00:22:50.925 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 654604 ']' 00:22:50.925 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 654604 00:22:50.925 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:22:50.925 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:50.925 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 654604 00:22:50.925 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:50.925 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:50.925 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 654604' 00:22:50.925 killing process with pid 654604 00:22:50.925 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 654604 00:22:50.925 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 654604 00:22:50.925 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:50.925 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:50.925 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:50.925 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:50.925 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:50.925 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:50.925 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:50.925 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:50.925 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:50.925 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.925 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:50.925 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.466 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:53.466 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:22:53.466 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:53.466 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:54.405 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:57.084 15:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:02.374 15:32:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:02.374 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:02.374 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:02.375 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:02.375 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:02.375 15:32:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:02.375 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:02.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:02.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:23:02.375 00:23:02.375 --- 10.0.0.2 ping statistics --- 00:23:02.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.375 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:02.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:02.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:23:02.375 00:23:02.375 --- 10.0.0.1 ping statistics --- 00:23:02.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.375 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:02.375 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:02.376 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:02.376 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:02.376 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:02.376 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:02.376 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:23:02.376 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:02.376 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:02.376 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:02.376 net.core.busy_poll = 1 00:23:02.376 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:23:02.376 net.core.busy_read = 1 00:23:02.376 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:02.376 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:02.376 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:23:02.376 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:02.376 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:02.376 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:02.376 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:02.376 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:02.376 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:02.376 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=659506 00:23:02.376 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 659506 00:23:02.376 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:02.376 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 659506 ']' 00:23:02.376 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.376 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:02.376 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:02.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:02.376 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:02.376 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:02.376 [2024-11-20 15:32:51.137743] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:23:02.376 [2024-11-20 15:32:51.137812] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:02.376 [2024-11-20 15:32:51.242169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:02.376 [2024-11-20 15:32:51.295827] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
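The adq_configure_driver steps just traced are the core of the ADQ setup: split the E810 port into two hardware traffic classes, steer NVMe/TCP (TCP port 4420) into the second one, and enable socket busy polling so application threads poll their dedicated queues. Condensed (the ethtool and tc commands run inside the target namespace via ip netns exec cvl_0_0_ns_spdk, dropped here for brevity; the sysctls are applied as shown):

  ethtool --offload cvl_0_0 hw-tc-offload on         # let the NIC enforce traffic classes
  ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1                     # busy-poll budget for poll/select, in us
  sysctl -w net.core.busy_read=1                     # same for socket reads
  tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
                                                     # TC0 = queues 0-1, TC1 = queues 2-3, offloaded
  tc qdisc add dev cvl_0_0 ingress
  tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
                                                     # hardware-only rule: NVMe/TCP flows -> TC1

The set_xps_rxqs script invoked afterwards appears to pin each transmit queue to its matching receive queue (the kernel's xps_rxqs mapping), and the second adq_configure_nvmf_target pass below re-creates the target configuration with --enable-placement-id 1 and --sock-priority 1 so the SPDK poll groups stay aligned with those hardware queues.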
00:23:02.376 [2024-11-20 15:32:51.295878] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:02.376 [2024-11-20 15:32:51.295887] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:02.376 [2024-11-20 15:32:51.295895] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:02.376 [2024-11-20 15:32:51.295901] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:02.376 [2024-11-20 15:32:51.297877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:02.376 [2024-11-20 15:32:51.298019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:02.376 [2024-11-20 15:32:51.298244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:02.376 [2024-11-20 15:32:51.298246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.336 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:03.336 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:23:03.336 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:03.336 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:03.336 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:03.336 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:03.336 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:23:03.336 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:03.336 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:03.336 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.336 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:03.336 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.336 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:03.336 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:03.336 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.336 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:03.336 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.336 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:03.336 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.336 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:03.336 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.336 15:32:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:03.336 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.336 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:03.336 [2024-11-20 15:32:52.163106] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:03.336 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.336 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:03.336 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.336 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:03.336 Malloc1 00:23:03.336 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.336 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:03.336 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.336 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:03.336 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.336 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:03.336 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.336 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:03.336 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.336 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:03.336 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.336 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:03.336 [2024-11-20 15:32:52.239329] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:03.336 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.336 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=659771 00:23:03.336 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:23:03.336 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:05.883 15:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:23:05.883 15:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.883 15:32:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:05.883 15:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.883 15:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:23:05.883 "tick_rate": 2400000000, 00:23:05.883 "poll_groups": [ 00:23:05.883 { 00:23:05.883 "name": "nvmf_tgt_poll_group_000", 00:23:05.883 "admin_qpairs": 1, 00:23:05.883 "io_qpairs": 4, 00:23:05.883 "current_admin_qpairs": 1, 00:23:05.883 "current_io_qpairs": 4, 00:23:05.883 "pending_bdev_io": 0, 00:23:05.883 "completed_nvme_io": 42305, 00:23:05.883 "transports": [ 00:23:05.883 { 00:23:05.883 "trtype": "TCP" 00:23:05.883 } 00:23:05.883 ] 00:23:05.883 }, 00:23:05.883 { 00:23:05.883 "name": "nvmf_tgt_poll_group_001", 00:23:05.883 "admin_qpairs": 0, 00:23:05.883 "io_qpairs": 0, 00:23:05.883 "current_admin_qpairs": 0, 00:23:05.883 "current_io_qpairs": 0, 00:23:05.883 "pending_bdev_io": 0, 00:23:05.883 "completed_nvme_io": 0, 00:23:05.883 "transports": [ 00:23:05.883 { 00:23:05.883 "trtype": "TCP" 00:23:05.883 } 00:23:05.883 ] 00:23:05.883 }, 00:23:05.883 { 00:23:05.883 "name": "nvmf_tgt_poll_group_002", 00:23:05.883 "admin_qpairs": 0, 00:23:05.883 "io_qpairs": 0, 00:23:05.883 "current_admin_qpairs": 0, 00:23:05.883 "current_io_qpairs": 0, 00:23:05.883 "pending_bdev_io": 0, 00:23:05.883 "completed_nvme_io": 0, 00:23:05.883 "transports": [ 00:23:05.883 { 00:23:05.883 "trtype": "TCP" 00:23:05.883 } 00:23:05.883 ] 00:23:05.883 }, 00:23:05.883 { 00:23:05.883 "name": "nvmf_tgt_poll_group_003", 00:23:05.883 "admin_qpairs": 0, 00:23:05.883 "io_qpairs": 0, 00:23:05.883 "current_admin_qpairs": 0, 00:23:05.883 "current_io_qpairs": 0, 00:23:05.883 "pending_bdev_io": 0, 00:23:05.883 "completed_nvme_io": 0, 00:23:05.883 "transports": [ 00:23:05.883 { 00:23:05.883 "trtype": "TCP" 00:23:05.883 } 00:23:05.883 ] 00:23:05.883 } 00:23:05.883 ] 00:23:05.883 }' 00:23:05.883 15:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:05.883 15:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:23:05.883 15:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:23:05.883 15:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:23:05.883 15:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 659771 00:23:14.020 Initializing NVMe Controllers 00:23:14.020 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:14.020 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:14.020 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:14.020 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:14.020 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:14.020 Initialization complete. Launching workers. 
00:23:14.020 ======================================================== 00:23:14.020 Latency(us) 00:23:14.020 Device Information : IOPS MiB/s Average min max 00:23:14.020 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6055.50 23.65 10571.15 1225.59 58908.62 00:23:14.020 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6316.30 24.67 10175.86 1292.53 56267.96 00:23:14.020 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7220.90 28.21 8863.96 1083.60 60771.85 00:23:14.020 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6128.10 23.94 10476.80 1399.25 59273.18 00:23:14.020 ======================================================== 00:23:14.020 Total : 25720.79 100.47 9972.32 1083.60 60771.85 00:23:14.020 00:23:14.020 15:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:23:14.020 15:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:14.020 15:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:23:14.020 15:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:14.020 15:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:23:14.020 15:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:14.020 15:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:14.020 rmmod nvme_tcp 00:23:14.020 rmmod nvme_fabrics 00:23:14.020 rmmod nvme_keyring 00:23:14.020 15:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:14.020 15:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:23:14.020 15:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:23:14.020 15:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 659506 ']' 00:23:14.020 15:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 659506 00:23:14.020 15:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 659506 ']' 00:23:14.021 15:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 659506 00:23:14.021 15:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:23:14.021 15:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:14.021 15:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 659506 00:23:14.021 15:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:14.021 15:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:14.021 15:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 659506' 00:23:14.021 killing process with pid 659506 00:23:14.021 15:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 659506 00:23:14.021 15:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 659506 00:23:14.021 15:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:14.021 15:33:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:14.021 15:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:14.021 15:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:23:14.021 15:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:14.021 15:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:23:14.021 15:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:23:14.021 15:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:14.021 15:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:14.021 15:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.021 15:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:14.021 15:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.322 15:33:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:17.322 15:33:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:23:17.322 00:23:17.322 real 0m54.284s 00:23:17.322 user 2m51.158s 00:23:17.322 sys 0m11.138s 00:23:17.322 15:33:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:17.322 15:33:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:17.322 ************************************ 00:23:17.322 END TEST nvmf_perf_adq 00:23:17.322 ************************************ 00:23:17.322 15:33:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:17.322 15:33:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:17.322 15:33:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:17.322 15:33:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:17.322 ************************************ 00:23:17.322 START TEST nvmf_shutdown 00:23:17.322 ************************************ 00:23:17.322 15:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:17.322 * Looking for test storage... 
00:23:17.322 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:17.322 15:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:17.322 15:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:23:17.322 15:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:17.322 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:17.322 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:17.322 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:17.322 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:17.322 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:17.322 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:17.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.323 --rc genhtml_branch_coverage=1 00:23:17.323 --rc genhtml_function_coverage=1 00:23:17.323 --rc genhtml_legend=1 00:23:17.323 --rc geninfo_all_blocks=1 00:23:17.323 --rc geninfo_unexecuted_blocks=1 00:23:17.323 00:23:17.323 ' 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:17.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.323 --rc genhtml_branch_coverage=1 00:23:17.323 --rc genhtml_function_coverage=1 00:23:17.323 --rc genhtml_legend=1 00:23:17.323 --rc geninfo_all_blocks=1 00:23:17.323 --rc geninfo_unexecuted_blocks=1 00:23:17.323 00:23:17.323 ' 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:17.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.323 --rc genhtml_branch_coverage=1 00:23:17.323 --rc genhtml_function_coverage=1 00:23:17.323 --rc genhtml_legend=1 00:23:17.323 --rc geninfo_all_blocks=1 00:23:17.323 --rc geninfo_unexecuted_blocks=1 00:23:17.323 00:23:17.323 ' 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:17.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.323 --rc genhtml_branch_coverage=1 00:23:17.323 --rc genhtml_function_coverage=1 00:23:17.323 --rc genhtml_legend=1 00:23:17.323 --rc geninfo_all_blocks=1 00:23:17.323 --rc geninfo_unexecuted_blocks=1 00:23:17.323 00:23:17.323 ' 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:17.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:17.323 15:33:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:17.323 ************************************ 00:23:17.323 START TEST nvmf_shutdown_tc1 00:23:17.323 ************************************ 00:23:17.323 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:23:17.324 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:23:17.324 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:17.324 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:17.324 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:17.324 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:17.324 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:17.324 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:17.324 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.324 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:17.324 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.324 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:17.324 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:17.324 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:17.324 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:25.467 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:25.467 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:25.467 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:25.467 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:25.467 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:25.467 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:25.467 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:25.467 15:33:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:23:25.467 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:25.467 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:23:25.467 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:25.468 15:33:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:25.468 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:25.468 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:25.468 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:25.468 15:33:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:25.468 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:25.468 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:25.469 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:25.469 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:25.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:25.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:23:25.469 00:23:25.469 --- 10.0.0.2 ping statistics --- 00:23:25.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.469 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:23:25.469 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:25.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:25.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:23:25.469 00:23:25.469 --- 10.0.0.1 ping statistics --- 00:23:25.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.469 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:23:25.469 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:25.469 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:23:25.469 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:25.469 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:25.469 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:25.469 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:25.469 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:25.469 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:25.469 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:25.469 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:25.469 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:25.469 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:25.469 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:25.469 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=666799 00:23:25.469 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 666799 00:23:25.469 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:25.469 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 666799 ']' 00:23:25.469 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.469 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:25.469 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
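The nvmfappstart trace above boils down to launching nvmf_tgt inside the target namespace and blocking until its RPC socket answers. A minimal sketch of that pattern, reusing the binary path and namespace name from this run; the polling loop is a simplification (the harness's waitforlisten retries an actual RPC rather than testing for the socket file):

    # Start the target inside the namespace created earlier during nvmf_tcp_init.
    SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
    ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # Simplified wait: poll for the UNIX-domain RPC socket to appear.
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done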
00:23:25.469 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:25.469 15:33:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:25.469 [2024-11-20 15:33:13.857201] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:23:25.469 [2024-11-20 15:33:13.857273] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.469 [2024-11-20 15:33:13.960035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:25.469 [2024-11-20 15:33:14.012780] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.469 [2024-11-20 15:33:14.012827] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.469 [2024-11-20 15:33:14.012836] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.469 [2024-11-20 15:33:14.012843] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.469 [2024-11-20 15:33:14.012850] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:25.469 [2024-11-20 15:33:14.015196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:25.469 [2024-11-20 15:33:14.015298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:25.469 [2024-11-20 15:33:14.015463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.469 [2024-11-20 15:33:14.015463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:25.730 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:25.730 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:23:25.730 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:25.730 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:25.730 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:25.991 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:25.991 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:25.991 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.991 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:25.991 [2024-11-20 15:33:14.733049] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:25.991 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.991 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:25.991 15:33:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:25.991 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:25.991 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:25.991 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:25.991 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:25.991 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:25.991 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:25.991 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:25.991 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:25.992 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:25.992 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:25.992 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:25.992 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:25.992 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:25.992 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:25.992 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:25.992 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:25.992 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:25.992 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:25.992 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:25.992 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:25.992 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:25.992 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:25.992 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:25.992 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:25.992 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.992 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:25.992 Malloc1 
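shutdown.sh builds its whole target configuration offline: the loop above appends one RPC stanza per subsystem to rpcs.txt, and the single rpc_cmd at shutdown.sh@36 replays the file in one shot; Malloc1 here and the Malloc2..Malloc10 lines that follow are the resulting bdev-creation responses. The heredoc body is hidden by the xtrace, so the stanza below is a hypothetical reconstruction of one iteration, inferred from the RPCs seen elsewhere in this log and the MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 values set at shutdown.sh@12-13:

    # Hypothetical per-subsystem stanza (i ranges over 1..10); the serial
    # number format is a guess consistent with the perf_adq test above.
    cat >> rpcs.txt <<EOF
    bdev_malloc_create 64 512 -b Malloc$i
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    EOF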
00:23:25.992 [2024-11-20 15:33:14.857816] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.992 Malloc2 00:23:25.992 Malloc3 00:23:26.252 Malloc4 00:23:26.252 Malloc5 00:23:26.252 Malloc6 00:23:26.252 Malloc7 00:23:26.252 Malloc8 00:23:26.514 Malloc9 00:23:26.514 Malloc10 00:23:26.514 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.514 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:26.514 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:26.514 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:26.514 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=667178 00:23:26.514 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 667178 /var/tmp/bdevperf.sock 00:23:26.514 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 667178 ']' 00:23:26.514 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:26.514 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:26.514 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:26.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
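The bdevperf-side app traced just below is started with --json /dev/fd/63, which is the footprint of bash process substitution: gen_nvmf_target_json writes the initiator configuration to a pipe and bdev_svc consumes it as its startup JSON, with no temp file involved. A small sketch of the same idiom, with gen_config standing in for gen_nvmf_target_json:

    # Hypothetical stand-in generator; any command printing valid config JSON works.
    gen_config() { printf '{"subsystems": []}\n'; }
    # <(...) expands to a /dev/fd path such as /dev/fd/63.
    test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_config)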
00:23:26.514 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:26.514 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:26.514 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:26.514 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:26.514 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:26.514 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:23:26.514 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:26.514 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:26.514 { 00:23:26.514 "params": { 00:23:26.514 "name": "Nvme$subsystem", 00:23:26.514 "trtype": "$TEST_TRANSPORT", 00:23:26.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.514 "adrfam": "ipv4", 00:23:26.514 "trsvcid": "$NVMF_PORT", 00:23:26.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.514 "hdgst": ${hdgst:-false}, 00:23:26.514 "ddgst": ${ddgst:-false} 00:23:26.514 }, 00:23:26.514 "method": "bdev_nvme_attach_controller" 00:23:26.514 } 00:23:26.514 EOF 00:23:26.514 )") 00:23:26.514 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:26.514 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:26.514 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:26.514 { 00:23:26.514 "params": { 00:23:26.514 "name": "Nvme$subsystem", 00:23:26.514 "trtype": "$TEST_TRANSPORT", 00:23:26.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.514 "adrfam": "ipv4", 00:23:26.514 "trsvcid": "$NVMF_PORT", 00:23:26.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.514 "hdgst": ${hdgst:-false}, 00:23:26.514 "ddgst": ${ddgst:-false} 00:23:26.514 }, 00:23:26.514 "method": "bdev_nvme_attach_controller" 00:23:26.514 } 00:23:26.514 EOF 00:23:26.514 )") 00:23:26.514 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:26.514 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:26.514 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:26.514 { 00:23:26.514 "params": { 00:23:26.514 "name": "Nvme$subsystem", 00:23:26.514 "trtype": "$TEST_TRANSPORT", 00:23:26.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.514 "adrfam": "ipv4", 00:23:26.514 "trsvcid": "$NVMF_PORT", 00:23:26.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.514 "hdgst": ${hdgst:-false}, 00:23:26.514 "ddgst": ${ddgst:-false} 00:23:26.514 }, 00:23:26.514 "method": "bdev_nvme_attach_controller" 
00:23:26.514 } 00:23:26.514 EOF 00:23:26.514 )") 00:23:26.514 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:26.514 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:26.514 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:26.514 { 00:23:26.514 "params": { 00:23:26.514 "name": "Nvme$subsystem", 00:23:26.514 "trtype": "$TEST_TRANSPORT", 00:23:26.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.514 "adrfam": "ipv4", 00:23:26.514 "trsvcid": "$NVMF_PORT", 00:23:26.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.514 "hdgst": ${hdgst:-false}, 00:23:26.514 "ddgst": ${ddgst:-false} 00:23:26.514 }, 00:23:26.514 "method": "bdev_nvme_attach_controller" 00:23:26.514 } 00:23:26.514 EOF 00:23:26.514 )") 00:23:26.514 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:26.514 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:26.514 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:26.514 { 00:23:26.514 "params": { 00:23:26.514 "name": "Nvme$subsystem", 00:23:26.514 "trtype": "$TEST_TRANSPORT", 00:23:26.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.514 "adrfam": "ipv4", 00:23:26.514 "trsvcid": "$NVMF_PORT", 00:23:26.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.514 "hdgst": ${hdgst:-false}, 00:23:26.514 "ddgst": ${ddgst:-false} 00:23:26.514 }, 00:23:26.514 "method": "bdev_nvme_attach_controller" 00:23:26.514 } 00:23:26.514 EOF 00:23:26.514 )") 00:23:26.514 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:26.514 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:26.514 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:26.514 { 00:23:26.514 "params": { 00:23:26.514 "name": "Nvme$subsystem", 00:23:26.514 "trtype": "$TEST_TRANSPORT", 00:23:26.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.514 "adrfam": "ipv4", 00:23:26.514 "trsvcid": "$NVMF_PORT", 00:23:26.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.514 "hdgst": ${hdgst:-false}, 00:23:26.514 "ddgst": ${ddgst:-false} 00:23:26.514 }, 00:23:26.514 "method": "bdev_nvme_attach_controller" 00:23:26.514 } 00:23:26.514 EOF 00:23:26.514 )") 00:23:26.515 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:26.515 [2024-11-20 15:33:15.375774] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
00:23:26.515 [2024-11-20 15:33:15.375848] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:26.515 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:26.515 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:26.515 { 00:23:26.515 "params": { 00:23:26.515 "name": "Nvme$subsystem", 00:23:26.515 "trtype": "$TEST_TRANSPORT", 00:23:26.515 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.515 "adrfam": "ipv4", 00:23:26.515 "trsvcid": "$NVMF_PORT", 00:23:26.515 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.515 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.515 "hdgst": ${hdgst:-false}, 00:23:26.515 "ddgst": ${ddgst:-false} 00:23:26.515 }, 00:23:26.515 "method": "bdev_nvme_attach_controller" 00:23:26.515 } 00:23:26.515 EOF 00:23:26.515 )") 00:23:26.515 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:26.515 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:26.515 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:26.515 { 00:23:26.515 "params": { 00:23:26.515 "name": "Nvme$subsystem", 00:23:26.515 "trtype": "$TEST_TRANSPORT", 00:23:26.515 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.515 "adrfam": "ipv4", 00:23:26.515 "trsvcid": "$NVMF_PORT", 00:23:26.515 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.515 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.515 "hdgst": ${hdgst:-false}, 00:23:26.515 "ddgst": ${ddgst:-false} 00:23:26.515 }, 00:23:26.515 "method": "bdev_nvme_attach_controller" 00:23:26.515 } 00:23:26.515 EOF 00:23:26.515 )") 00:23:26.515 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:26.515 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:26.515 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:26.515 { 00:23:26.515 "params": { 00:23:26.515 "name": "Nvme$subsystem", 00:23:26.515 "trtype": "$TEST_TRANSPORT", 00:23:26.515 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.515 "adrfam": "ipv4", 00:23:26.515 "trsvcid": "$NVMF_PORT", 00:23:26.515 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.515 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.515 "hdgst": ${hdgst:-false}, 00:23:26.515 "ddgst": ${ddgst:-false} 00:23:26.515 }, 00:23:26.515 "method": "bdev_nvme_attach_controller" 00:23:26.515 } 00:23:26.515 EOF 00:23:26.515 )") 00:23:26.515 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:26.515 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:26.515 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:26.515 { 00:23:26.515 "params": { 00:23:26.515 "name": "Nvme$subsystem", 00:23:26.515 "trtype": "$TEST_TRANSPORT", 00:23:26.515 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.515 "adrfam": "ipv4", 
00:23:26.515 "trsvcid": "$NVMF_PORT", 00:23:26.515 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.515 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.515 "hdgst": ${hdgst:-false}, 00:23:26.515 "ddgst": ${ddgst:-false} 00:23:26.515 }, 00:23:26.515 "method": "bdev_nvme_attach_controller" 00:23:26.515 } 00:23:26.515 EOF 00:23:26.515 )") 00:23:26.515 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:26.515 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:23:26.515 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:26.515 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:26.515 "params": { 00:23:26.515 "name": "Nvme1", 00:23:26.515 "trtype": "tcp", 00:23:26.515 "traddr": "10.0.0.2", 00:23:26.515 "adrfam": "ipv4", 00:23:26.515 "trsvcid": "4420", 00:23:26.515 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.515 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:26.515 "hdgst": false, 00:23:26.515 "ddgst": false 00:23:26.515 }, 00:23:26.515 "method": "bdev_nvme_attach_controller" 00:23:26.515 },{ 00:23:26.515 "params": { 00:23:26.515 "name": "Nvme2", 00:23:26.515 "trtype": "tcp", 00:23:26.515 "traddr": "10.0.0.2", 00:23:26.515 "adrfam": "ipv4", 00:23:26.515 "trsvcid": "4420", 00:23:26.515 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:26.515 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:26.515 "hdgst": false, 00:23:26.515 "ddgst": false 00:23:26.515 }, 00:23:26.515 "method": "bdev_nvme_attach_controller" 00:23:26.515 },{ 00:23:26.515 "params": { 00:23:26.515 "name": "Nvme3", 00:23:26.515 "trtype": "tcp", 00:23:26.515 "traddr": "10.0.0.2", 00:23:26.515 "adrfam": "ipv4", 00:23:26.515 "trsvcid": "4420", 00:23:26.515 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:26.515 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:26.515 "hdgst": false, 00:23:26.515 "ddgst": false 00:23:26.515 }, 00:23:26.515 "method": "bdev_nvme_attach_controller" 00:23:26.515 },{ 00:23:26.515 "params": { 00:23:26.515 "name": "Nvme4", 00:23:26.515 "trtype": "tcp", 00:23:26.515 "traddr": "10.0.0.2", 00:23:26.515 "adrfam": "ipv4", 00:23:26.515 "trsvcid": "4420", 00:23:26.515 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:26.515 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:26.515 "hdgst": false, 00:23:26.515 "ddgst": false 00:23:26.515 }, 00:23:26.515 "method": "bdev_nvme_attach_controller" 00:23:26.515 },{ 00:23:26.515 "params": { 00:23:26.515 "name": "Nvme5", 00:23:26.515 "trtype": "tcp", 00:23:26.515 "traddr": "10.0.0.2", 00:23:26.515 "adrfam": "ipv4", 00:23:26.515 "trsvcid": "4420", 00:23:26.515 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:26.515 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:26.515 "hdgst": false, 00:23:26.515 "ddgst": false 00:23:26.515 }, 00:23:26.515 "method": "bdev_nvme_attach_controller" 00:23:26.515 },{ 00:23:26.515 "params": { 00:23:26.515 "name": "Nvme6", 00:23:26.515 "trtype": "tcp", 00:23:26.515 "traddr": "10.0.0.2", 00:23:26.515 "adrfam": "ipv4", 00:23:26.515 "trsvcid": "4420", 00:23:26.515 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:26.515 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:26.515 "hdgst": false, 00:23:26.515 "ddgst": false 00:23:26.515 }, 00:23:26.515 "method": "bdev_nvme_attach_controller" 00:23:26.515 },{ 00:23:26.515 "params": { 00:23:26.515 "name": "Nvme7", 00:23:26.515 "trtype": "tcp", 00:23:26.515 "traddr": "10.0.0.2", 00:23:26.515 
"adrfam": "ipv4", 00:23:26.515 "trsvcid": "4420", 00:23:26.515 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:26.515 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:26.515 "hdgst": false, 00:23:26.515 "ddgst": false 00:23:26.515 }, 00:23:26.515 "method": "bdev_nvme_attach_controller" 00:23:26.515 },{ 00:23:26.515 "params": { 00:23:26.515 "name": "Nvme8", 00:23:26.515 "trtype": "tcp", 00:23:26.515 "traddr": "10.0.0.2", 00:23:26.515 "adrfam": "ipv4", 00:23:26.515 "trsvcid": "4420", 00:23:26.515 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:26.515 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:26.515 "hdgst": false, 00:23:26.515 "ddgst": false 00:23:26.515 }, 00:23:26.515 "method": "bdev_nvme_attach_controller" 00:23:26.515 },{ 00:23:26.515 "params": { 00:23:26.515 "name": "Nvme9", 00:23:26.515 "trtype": "tcp", 00:23:26.515 "traddr": "10.0.0.2", 00:23:26.515 "adrfam": "ipv4", 00:23:26.515 "trsvcid": "4420", 00:23:26.515 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:26.515 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:26.515 "hdgst": false, 00:23:26.515 "ddgst": false 00:23:26.515 }, 00:23:26.515 "method": "bdev_nvme_attach_controller" 00:23:26.515 },{ 00:23:26.515 "params": { 00:23:26.515 "name": "Nvme10", 00:23:26.515 "trtype": "tcp", 00:23:26.515 "traddr": "10.0.0.2", 00:23:26.515 "adrfam": "ipv4", 00:23:26.515 "trsvcid": "4420", 00:23:26.515 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:26.515 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:26.515 "hdgst": false, 00:23:26.515 "ddgst": false 00:23:26.515 }, 00:23:26.515 "method": "bdev_nvme_attach_controller" 00:23:26.515 }' 00:23:26.777 [2024-11-20 15:33:15.473522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.777 [2024-11-20 15:33:15.542710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.161 15:33:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:28.161 15:33:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:23:28.161 15:33:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:28.161 15:33:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.161 15:33:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:28.161 15:33:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.161 15:33:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 667178 00:23:28.161 15:33:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:23:28.161 15:33:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:23:29.105 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 667178 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:29.105 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 666799 00:23:29.105 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:29.105 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:29.105 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:29.105 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:23:29.105 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.105 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.105 { 00:23:29.105 "params": { 00:23:29.105 "name": "Nvme$subsystem", 00:23:29.105 "trtype": "$TEST_TRANSPORT", 00:23:29.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.105 "adrfam": "ipv4", 00:23:29.105 "trsvcid": "$NVMF_PORT", 00:23:29.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.105 "hdgst": ${hdgst:-false}, 00:23:29.105 "ddgst": ${ddgst:-false} 00:23:29.105 }, 00:23:29.105 "method": "bdev_nvme_attach_controller" 00:23:29.105 } 00:23:29.105 EOF 00:23:29.105 )") 00:23:29.105 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:29.105 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.105 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.105 { 00:23:29.105 "params": { 00:23:29.105 "name": "Nvme$subsystem", 00:23:29.105 "trtype": "$TEST_TRANSPORT", 00:23:29.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.105 "adrfam": "ipv4", 00:23:29.105 "trsvcid": "$NVMF_PORT", 00:23:29.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.105 "hdgst": ${hdgst:-false}, 00:23:29.105 "ddgst": ${ddgst:-false} 00:23:29.105 }, 00:23:29.105 "method": "bdev_nvme_attach_controller" 00:23:29.105 } 00:23:29.105 EOF 00:23:29.105 )") 00:23:29.105 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:29.105 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.105 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.105 { 00:23:29.105 "params": { 00:23:29.105 "name": "Nvme$subsystem", 00:23:29.105 "trtype": "$TEST_TRANSPORT", 00:23:29.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.105 "adrfam": "ipv4", 00:23:29.105 "trsvcid": "$NVMF_PORT", 00:23:29.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.105 "hdgst": ${hdgst:-false}, 00:23:29.105 "ddgst": ${ddgst:-false} 00:23:29.105 }, 00:23:29.105 "method": "bdev_nvme_attach_controller" 00:23:29.105 } 00:23:29.105 EOF 00:23:29.105 )") 00:23:29.105 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:29.105 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.105 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.105 { 00:23:29.105 "params": { 00:23:29.105 "name": "Nvme$subsystem", 00:23:29.105 "trtype": "$TEST_TRANSPORT", 00:23:29.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.105 "adrfam": "ipv4", 00:23:29.105 "trsvcid": "$NVMF_PORT", 00:23:29.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.105 "hdgst": ${hdgst:-false}, 00:23:29.105 "ddgst": ${ddgst:-false} 00:23:29.105 }, 00:23:29.105 "method": "bdev_nvme_attach_controller" 00:23:29.105 } 00:23:29.105 EOF 00:23:29.105 )") 00:23:29.105 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:29.105 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.105 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.105 { 00:23:29.105 "params": { 00:23:29.105 "name": "Nvme$subsystem", 00:23:29.105 "trtype": "$TEST_TRANSPORT", 00:23:29.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.105 "adrfam": "ipv4", 00:23:29.105 "trsvcid": "$NVMF_PORT", 00:23:29.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.105 "hdgst": ${hdgst:-false}, 00:23:29.105 "ddgst": ${ddgst:-false} 00:23:29.105 }, 00:23:29.105 "method": "bdev_nvme_attach_controller" 00:23:29.105 } 00:23:29.105 EOF 00:23:29.105 )") 00:23:29.105 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:29.105 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.105 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.105 { 00:23:29.105 "params": { 00:23:29.105 "name": "Nvme$subsystem", 00:23:29.105 "trtype": "$TEST_TRANSPORT", 00:23:29.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.105 "adrfam": "ipv4", 00:23:29.105 "trsvcid": "$NVMF_PORT", 00:23:29.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.105 "hdgst": ${hdgst:-false}, 00:23:29.105 "ddgst": ${ddgst:-false} 00:23:29.105 }, 00:23:29.105 "method": "bdev_nvme_attach_controller" 00:23:29.105 } 00:23:29.105 EOF 00:23:29.105 )") 00:23:29.105 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:29.105 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.105 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.105 { 00:23:29.105 "params": { 00:23:29.105 "name": "Nvme$subsystem", 00:23:29.105 "trtype": "$TEST_TRANSPORT", 00:23:29.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.105 "adrfam": "ipv4", 00:23:29.105 "trsvcid": "$NVMF_PORT", 00:23:29.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.105 "hdgst": ${hdgst:-false}, 00:23:29.105 "ddgst": ${ddgst:-false} 00:23:29.105 }, 00:23:29.105 "method": "bdev_nvme_attach_controller" 00:23:29.105 } 00:23:29.105 EOF 00:23:29.105 )") 00:23:29.105 [2024-11-20 15:33:17.863216] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
00:23:29.106 [2024-11-20 15:33:17.863270] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid667665 ] 00:23:29.106 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:29.106 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.106 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.106 { 00:23:29.106 "params": { 00:23:29.106 "name": "Nvme$subsystem", 00:23:29.106 "trtype": "$TEST_TRANSPORT", 00:23:29.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.106 "adrfam": "ipv4", 00:23:29.106 "trsvcid": "$NVMF_PORT", 00:23:29.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.106 "hdgst": ${hdgst:-false}, 00:23:29.106 "ddgst": ${ddgst:-false} 00:23:29.106 }, 00:23:29.106 "method": "bdev_nvme_attach_controller" 00:23:29.106 } 00:23:29.106 EOF 00:23:29.106 )") 00:23:29.106 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:29.106 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.106 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.106 { 00:23:29.106 "params": { 00:23:29.106 "name": "Nvme$subsystem", 00:23:29.106 "trtype": "$TEST_TRANSPORT", 00:23:29.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.106 "adrfam": "ipv4", 00:23:29.106 "trsvcid": "$NVMF_PORT", 00:23:29.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.106 "hdgst": ${hdgst:-false}, 00:23:29.106 "ddgst": ${ddgst:-false} 00:23:29.106 }, 00:23:29.106 "method": "bdev_nvme_attach_controller" 00:23:29.106 } 00:23:29.106 EOF 00:23:29.106 )") 00:23:29.106 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:29.106 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.106 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.106 { 00:23:29.106 "params": { 00:23:29.106 "name": "Nvme$subsystem", 00:23:29.106 "trtype": "$TEST_TRANSPORT", 00:23:29.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.106 "adrfam": "ipv4", 00:23:29.106 "trsvcid": "$NVMF_PORT", 00:23:29.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.106 "hdgst": ${hdgst:-false}, 00:23:29.106 "ddgst": ${ddgst:-false} 00:23:29.106 }, 00:23:29.106 "method": "bdev_nvme_attach_controller" 00:23:29.106 } 00:23:29.106 EOF 00:23:29.106 )") 00:23:29.106 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:29.106 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
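The gen_nvmf_target_json helper traced here (nvmf/common.sh@560-586) builds that JSON by collecting one bdev_nvme_attach_controller fragment per subsystem in a bash array, comma-joining the fragments with IFS=, plus printf, and running the assembled document through jq before it reaches the app. A simplified sketch; the per-subsystem fragment mirrors the trace, while the outer subsystems/config wrapper shown is an assumption about the parts the xtrace output does not echo:

gen_nvmf_target_json() {
  local subsystem config=()
  for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  local IFS=,
  # Comma-join the fragments and let jq validate/pretty-print the result.
  jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
JSON
}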
00:23:29.106 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:29.106 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:29.106 "params": { 00:23:29.106 "name": "Nvme1", 00:23:29.106 "trtype": "tcp", 00:23:29.106 "traddr": "10.0.0.2", 00:23:29.106 "adrfam": "ipv4", 00:23:29.106 "trsvcid": "4420", 00:23:29.106 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.106 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:29.106 "hdgst": false, 00:23:29.106 "ddgst": false 00:23:29.106 }, 00:23:29.106 "method": "bdev_nvme_attach_controller" 00:23:29.106 },{ 00:23:29.106 "params": { 00:23:29.106 "name": "Nvme2", 00:23:29.106 "trtype": "tcp", 00:23:29.106 "traddr": "10.0.0.2", 00:23:29.106 "adrfam": "ipv4", 00:23:29.106 "trsvcid": "4420", 00:23:29.106 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:29.106 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:29.106 "hdgst": false, 00:23:29.106 "ddgst": false 00:23:29.106 }, 00:23:29.106 "method": "bdev_nvme_attach_controller" 00:23:29.106 },{ 00:23:29.106 "params": { 00:23:29.106 "name": "Nvme3", 00:23:29.106 "trtype": "tcp", 00:23:29.106 "traddr": "10.0.0.2", 00:23:29.106 "adrfam": "ipv4", 00:23:29.106 "trsvcid": "4420", 00:23:29.106 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:29.106 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:29.106 "hdgst": false, 00:23:29.106 "ddgst": false 00:23:29.106 }, 00:23:29.106 "method": "bdev_nvme_attach_controller" 00:23:29.106 },{ 00:23:29.106 "params": { 00:23:29.106 "name": "Nvme4", 00:23:29.106 "trtype": "tcp", 00:23:29.106 "traddr": "10.0.0.2", 00:23:29.106 "adrfam": "ipv4", 00:23:29.106 "trsvcid": "4420", 00:23:29.106 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:29.106 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:29.106 "hdgst": false, 00:23:29.106 "ddgst": false 00:23:29.106 }, 00:23:29.106 "method": "bdev_nvme_attach_controller" 00:23:29.106 },{ 00:23:29.106 "params": { 00:23:29.106 "name": "Nvme5", 00:23:29.106 "trtype": "tcp", 00:23:29.106 "traddr": "10.0.0.2", 00:23:29.106 "adrfam": "ipv4", 00:23:29.106 "trsvcid": "4420", 00:23:29.106 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:29.106 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:29.106 "hdgst": false, 00:23:29.106 "ddgst": false 00:23:29.106 }, 00:23:29.106 "method": "bdev_nvme_attach_controller" 00:23:29.106 },{ 00:23:29.106 "params": { 00:23:29.106 "name": "Nvme6", 00:23:29.106 "trtype": "tcp", 00:23:29.106 "traddr": "10.0.0.2", 00:23:29.106 "adrfam": "ipv4", 00:23:29.106 "trsvcid": "4420", 00:23:29.106 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:29.106 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:29.106 "hdgst": false, 00:23:29.106 "ddgst": false 00:23:29.106 }, 00:23:29.106 "method": "bdev_nvme_attach_controller" 00:23:29.106 },{ 00:23:29.106 "params": { 00:23:29.106 "name": "Nvme7", 00:23:29.106 "trtype": "tcp", 00:23:29.106 "traddr": "10.0.0.2", 00:23:29.106 "adrfam": "ipv4", 00:23:29.106 "trsvcid": "4420", 00:23:29.106 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:29.106 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:29.106 "hdgst": false, 00:23:29.106 "ddgst": false 00:23:29.106 }, 00:23:29.106 "method": "bdev_nvme_attach_controller" 00:23:29.106 },{ 00:23:29.106 "params": { 00:23:29.106 "name": "Nvme8", 00:23:29.106 "trtype": "tcp", 00:23:29.106 "traddr": "10.0.0.2", 00:23:29.106 "adrfam": "ipv4", 00:23:29.106 "trsvcid": "4420", 00:23:29.106 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:29.106 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:23:29.107 "hdgst": false, 00:23:29.107 "ddgst": false 00:23:29.107 }, 00:23:29.107 "method": "bdev_nvme_attach_controller" 00:23:29.107 },{ 00:23:29.107 "params": { 00:23:29.107 "name": "Nvme9", 00:23:29.107 "trtype": "tcp", 00:23:29.107 "traddr": "10.0.0.2", 00:23:29.107 "adrfam": "ipv4", 00:23:29.107 "trsvcid": "4420", 00:23:29.107 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:29.107 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:29.107 "hdgst": false, 00:23:29.107 "ddgst": false 00:23:29.107 }, 00:23:29.107 "method": "bdev_nvme_attach_controller" 00:23:29.107 },{ 00:23:29.107 "params": { 00:23:29.107 "name": "Nvme10", 00:23:29.107 "trtype": "tcp", 00:23:29.107 "traddr": "10.0.0.2", 00:23:29.107 "adrfam": "ipv4", 00:23:29.107 "trsvcid": "4420", 00:23:29.107 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:29.107 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:29.107 "hdgst": false, 00:23:29.107 "ddgst": false 00:23:29.107 }, 00:23:29.107 "method": "bdev_nvme_attach_controller" 00:23:29.107 }' 00:23:29.107 [2024-11-20 15:33:17.947121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.107 [2024-11-20 15:33:17.976776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.488 Running I/O for 1 seconds... 00:23:31.690 2829.00 IOPS, 176.81 MiB/s 00:23:31.690 Latency(us) 00:23:31.690 [2024-11-20T14:33:20.650Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.690 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:31.690 Verification LBA range: start 0x0 length 0x400 00:23:31.691 Nvme1n1 : 1.13 340.87 21.30 0.00 0.00 185892.12 13052.59 173888.85 00:23:31.691 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:31.691 Verification LBA range: start 0x0 length 0x400 00:23:31.691 Nvme2n1 : 1.09 292.28 18.27 0.00 0.00 213707.26 16274.77 191365.12 00:23:31.691 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:31.691 Verification LBA range: start 0x0 length 0x400 00:23:31.691 Nvme3n1 : 1.13 340.18 21.26 0.00 0.00 181569.85 13707.95 206219.95 00:23:31.691 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:31.691 Verification LBA range: start 0x0 length 0x400 00:23:31.691 Nvme4n1 : 1.10 348.71 21.79 0.00 0.00 174797.30 12615.68 173015.04 00:23:31.691 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:31.691 Verification LBA range: start 0x0 length 0x400 00:23:31.691 Nvme5n1 : 1.14 336.98 21.06 0.00 0.00 178965.33 13981.01 176510.29 00:23:31.691 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:31.691 Verification LBA range: start 0x0 length 0x400 00:23:31.691 Nvme6n1 : 1.10 353.23 22.08 0.00 0.00 167344.22 9666.56 185248.43 00:23:31.691 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:31.691 Verification LBA range: start 0x0 length 0x400 00:23:31.691 Nvme7n1 : 1.13 338.72 21.17 0.00 0.00 173554.49 13598.72 186996.05 00:23:31.691 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:31.691 Verification LBA range: start 0x0 length 0x400 00:23:31.691 Nvme8n1 : 1.12 345.10 21.57 0.00 0.00 165682.25 9939.63 168645.97 00:23:31.691 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:31.691 Verification LBA range: start 0x0 length 0x400 00:23:31.691 Nvme9n1 : 1.15 334.30 20.89 0.00 0.00 171659.73 11796.48 200977.07 00:23:31.691 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:23:31.691 Verification LBA range: start 0x0 length 0x400 00:23:31.691 Nvme10n1 : 1.14 340.61 21.29 0.00 0.00 165869.12 1051.31 185248.43 00:23:31.691 [2024-11-20T14:33:20.651Z] =================================================================================================================== 00:23:31.691 [2024-11-20T14:33:20.651Z] Total : 3370.99 210.69 0.00 0.00 177263.32 1051.31 206219.95 00:23:31.691 15:33:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:23:31.691 15:33:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:31.691 15:33:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:31.691 15:33:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:31.691 15:33:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:31.691 15:33:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:31.691 15:33:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:23:31.691 15:33:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:31.691 15:33:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:23:31.691 15:33:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:31.691 15:33:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:31.691 rmmod nvme_tcp 00:23:31.691 rmmod nvme_fabrics 00:23:31.691 rmmod nvme_keyring 00:23:31.691 15:33:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:31.691 15:33:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:23:31.691 15:33:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:23:31.691 15:33:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 666799 ']' 00:23:31.691 15:33:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 666799 00:23:31.691 15:33:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 666799 ']' 00:23:31.691 15:33:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 666799 00:23:31.691 15:33:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:23:31.691 15:33:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:31.691 15:33:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 666799 00:23:31.951 15:33:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:31.951 15:33:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:31.951 15:33:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 666799' 00:23:31.951 killing process with pid 666799 00:23:31.951 15:33:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 666799 00:23:31.951 15:33:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 666799 00:23:31.951 15:33:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:31.951 15:33:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:31.951 15:33:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:31.951 15:33:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:23:31.951 15:33:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:23:31.951 15:33:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:31.951 15:33:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:23:31.951 15:33:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:31.951 15:33:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:31.951 15:33:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.951 15:33:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:31.951 15:33:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.499 15:33:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:34.499 00:23:34.499 real 0m16.805s 00:23:34.499 user 0m33.228s 00:23:34.499 sys 0m7.118s 00:23:34.499 15:33:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:34.499 15:33:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:34.499 ************************************ 00:23:34.499 END TEST nvmf_shutdown_tc1 00:23:34.499 ************************************ 00:23:34.499 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:34.499 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:34.499 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:34.499 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:34.499 ************************************ 00:23:34.499 START TEST nvmf_shutdown_tc2 00:23:34.499 ************************************ 00:23:34.499 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:23:34.499 15:33:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:23:34.499 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:34.499 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:34.499 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:34.499 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:34.499 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:34.499 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:34.499 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.499 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:34.499 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.499 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:34.499 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:34.499 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:34.499 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:34.499 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:34.499 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:34.499 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:34.499 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:34.499 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:34.499 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 
-- # mlx=() 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:34.500 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:34.500 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:34.500 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:34.500 15:33:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:34.500 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:34.500 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:34.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:34.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.569 ms 00:23:34.501 00:23:34.501 --- 10.0.0.2 ping statistics --- 00:23:34.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.501 rtt min/avg/max/mdev = 0.569/0.569/0.569/0.000 ms 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:34.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:34.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:23:34.501 00:23:34.501 --- 10.0.0.1 ping statistics --- 00:23:34.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.501 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:34.501 15:33:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=668903 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 668903 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 668903 ']' 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:34.501 15:33:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:34.762 [2024-11-20 15:33:23.492806] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:23:34.762 [2024-11-20 15:33:23.492873] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:34.762 [2024-11-20 15:33:23.587272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:34.762 [2024-11-20 15:33:23.621572] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:34.762 [2024-11-20 15:33:23.621602] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:34.762 [2024-11-20 15:33:23.621607] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:34.762 [2024-11-20 15:33:23.621612] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:34.762 [2024-11-20 15:33:23.621616] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
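Stripped of the test harness, the nvmf_tcp_init sequence traced above is a short iproute2/iptables recipe: move the target-side port into its own network namespace, address both ends, open TCP port 4420, and ping in both directions before starting the target. A minimal re-creation, as a sketch only, assuming the same cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing used in this run:

```bash
# Recreate the namespace topology from nvmf_tcp_init (names and addresses as in this log).
sudo ip netns add cvl_0_0_ns_spdk
sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port moves into the namespace
sudo ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator port stays in the root namespace
sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
sudo ip link set cvl_0_1 up
sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                                 # root ns -> target ns
sudo ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target ns -> root ns
```

This is why nvmf_tgt is launched under ip netns exec on the @508 line above: inside cvl_0_0_ns_spdk the target only sees cvl_0_0 and 10.0.0.2, while bdevperf connects from the root namespace over cvl_0_1.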
00:23:34.762 [2024-11-20 15:33:23.622927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:34.762 [2024-11-20 15:33:23.623081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:34.762 [2024-11-20 15:33:23.623219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:34.762 [2024-11-20 15:33:23.623221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:35.333 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:35.333 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:35.333 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:35.333 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:35.333 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:35.595 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.595 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:35.595 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.595 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:35.595 [2024-11-20 15:33:24.337595] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:35.595 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.595 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:35.595 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:35.595 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:35.595 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:35.595 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:35.595 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:35.595 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:35.595 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:35.595 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:35.595 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:35.595 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:35.595 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:23:35.595 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:35.595 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:35.595 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:35.595 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:35.595 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:35.595 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:35.595 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:35.595 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:35.595 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:35.595 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:35.595 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:35.595 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:35.596 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:35.596 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:35.596 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.596 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:35.596 Malloc1 00:23:35.596 [2024-11-20 15:33:24.448243] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:35.596 Malloc2 00:23:35.596 Malloc3 00:23:35.596 Malloc4 00:23:35.856 Malloc5 00:23:35.856 Malloc6 00:23:35.856 Malloc7 00:23:35.856 Malloc8 00:23:35.856 Malloc9 00:23:35.856 Malloc10 00:23:35.856 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.856 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:35.856 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:35.856 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:36.117 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=669125 00:23:36.117 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 669125 /var/tmp/bdevperf.sock 00:23:36.117 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 669125 ']' 00:23:36.117 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:36.117 15:33:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:36.117 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:36.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:36.117 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:36.117 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:36.117 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:36.117 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:36.117 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:23:36.117 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:23:36.117 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:36.117 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:36.117 { 00:23:36.117 "params": { 00:23:36.117 "name": "Nvme$subsystem", 00:23:36.117 "trtype": "$TEST_TRANSPORT", 00:23:36.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.117 "adrfam": "ipv4", 00:23:36.117 "trsvcid": "$NVMF_PORT", 00:23:36.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.117 "hdgst": ${hdgst:-false}, 00:23:36.117 "ddgst": ${ddgst:-false} 00:23:36.117 }, 00:23:36.117 "method": "bdev_nvme_attach_controller" 00:23:36.117 } 00:23:36.117 EOF 00:23:36.117 )") 00:23:36.117 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:36.117 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:36.117 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:36.117 { 00:23:36.117 "params": { 00:23:36.117 "name": "Nvme$subsystem", 00:23:36.117 "trtype": "$TEST_TRANSPORT", 00:23:36.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.117 "adrfam": "ipv4", 00:23:36.117 "trsvcid": "$NVMF_PORT", 00:23:36.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.117 "hdgst": ${hdgst:-false}, 00:23:36.117 "ddgst": ${ddgst:-false} 00:23:36.117 }, 00:23:36.117 "method": "bdev_nvme_attach_controller" 00:23:36.117 } 00:23:36.117 EOF 00:23:36.117 )") 00:23:36.117 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:36.117 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:36.117 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:36.117 { 00:23:36.117 "params": { 00:23:36.117 
"name": "Nvme$subsystem", 00:23:36.117 "trtype": "$TEST_TRANSPORT", 00:23:36.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.117 "adrfam": "ipv4", 00:23:36.117 "trsvcid": "$NVMF_PORT", 00:23:36.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.117 "hdgst": ${hdgst:-false}, 00:23:36.117 "ddgst": ${ddgst:-false} 00:23:36.117 }, 00:23:36.118 "method": "bdev_nvme_attach_controller" 00:23:36.118 } 00:23:36.118 EOF 00:23:36.118 )") 00:23:36.118 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:36.118 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:36.118 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:36.118 { 00:23:36.118 "params": { 00:23:36.118 "name": "Nvme$subsystem", 00:23:36.118 "trtype": "$TEST_TRANSPORT", 00:23:36.118 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.118 "adrfam": "ipv4", 00:23:36.118 "trsvcid": "$NVMF_PORT", 00:23:36.118 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.118 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.118 "hdgst": ${hdgst:-false}, 00:23:36.118 "ddgst": ${ddgst:-false} 00:23:36.118 }, 00:23:36.118 "method": "bdev_nvme_attach_controller" 00:23:36.118 } 00:23:36.118 EOF 00:23:36.118 )") 00:23:36.118 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:36.118 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:36.118 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:36.118 { 00:23:36.118 "params": { 00:23:36.118 "name": "Nvme$subsystem", 00:23:36.118 "trtype": "$TEST_TRANSPORT", 00:23:36.118 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.118 "adrfam": "ipv4", 00:23:36.118 "trsvcid": "$NVMF_PORT", 00:23:36.118 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.118 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.118 "hdgst": ${hdgst:-false}, 00:23:36.118 "ddgst": ${ddgst:-false} 00:23:36.118 }, 00:23:36.118 "method": "bdev_nvme_attach_controller" 00:23:36.118 } 00:23:36.118 EOF 00:23:36.118 )") 00:23:36.118 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:36.118 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:36.118 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:36.118 { 00:23:36.118 "params": { 00:23:36.118 "name": "Nvme$subsystem", 00:23:36.118 "trtype": "$TEST_TRANSPORT", 00:23:36.118 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.118 "adrfam": "ipv4", 00:23:36.118 "trsvcid": "$NVMF_PORT", 00:23:36.118 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.118 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.118 "hdgst": ${hdgst:-false}, 00:23:36.118 "ddgst": ${ddgst:-false} 00:23:36.118 }, 00:23:36.118 "method": "bdev_nvme_attach_controller" 00:23:36.118 } 00:23:36.118 EOF 00:23:36.118 )") 00:23:36.118 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:36.118 [2024-11-20 15:33:24.895143] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
00:23:36.118 [2024-11-20 15:33:24.895200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid669125 ] 00:23:36.118 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:36.118 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:36.118 { 00:23:36.118 "params": { 00:23:36.118 "name": "Nvme$subsystem", 00:23:36.118 "trtype": "$TEST_TRANSPORT", 00:23:36.118 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.118 "adrfam": "ipv4", 00:23:36.118 "trsvcid": "$NVMF_PORT", 00:23:36.118 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.118 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.118 "hdgst": ${hdgst:-false}, 00:23:36.118 "ddgst": ${ddgst:-false} 00:23:36.118 }, 00:23:36.118 "method": "bdev_nvme_attach_controller" 00:23:36.118 } 00:23:36.118 EOF 00:23:36.118 )") 00:23:36.118 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:36.118 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:36.118 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:36.118 { 00:23:36.118 "params": { 00:23:36.118 "name": "Nvme$subsystem", 00:23:36.118 "trtype": "$TEST_TRANSPORT", 00:23:36.118 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.118 "adrfam": "ipv4", 00:23:36.118 "trsvcid": "$NVMF_PORT", 00:23:36.118 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.118 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.118 "hdgst": ${hdgst:-false}, 00:23:36.118 "ddgst": ${ddgst:-false} 00:23:36.118 }, 00:23:36.118 "method": "bdev_nvme_attach_controller" 00:23:36.118 } 00:23:36.118 EOF 00:23:36.118 )") 00:23:36.118 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:36.118 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:36.118 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:36.118 { 00:23:36.118 "params": { 00:23:36.118 "name": "Nvme$subsystem", 00:23:36.118 "trtype": "$TEST_TRANSPORT", 00:23:36.118 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.118 "adrfam": "ipv4", 00:23:36.118 "trsvcid": "$NVMF_PORT", 00:23:36.118 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.118 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.118 "hdgst": ${hdgst:-false}, 00:23:36.118 "ddgst": ${ddgst:-false} 00:23:36.118 }, 00:23:36.118 "method": "bdev_nvme_attach_controller" 00:23:36.118 } 00:23:36.118 EOF 00:23:36.118 )") 00:23:36.118 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:36.118 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:36.118 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:36.118 { 00:23:36.118 "params": { 00:23:36.118 "name": "Nvme$subsystem", 00:23:36.118 "trtype": "$TEST_TRANSPORT", 00:23:36.118 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.118 
"adrfam": "ipv4", 00:23:36.118 "trsvcid": "$NVMF_PORT", 00:23:36.118 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.118 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.118 "hdgst": ${hdgst:-false}, 00:23:36.118 "ddgst": ${ddgst:-false} 00:23:36.118 }, 00:23:36.118 "method": "bdev_nvme_attach_controller" 00:23:36.118 } 00:23:36.118 EOF 00:23:36.118 )") 00:23:36.118 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:36.118 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:23:36.118 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:23:36.118 15:33:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:36.118 "params": { 00:23:36.118 "name": "Nvme1", 00:23:36.118 "trtype": "tcp", 00:23:36.118 "traddr": "10.0.0.2", 00:23:36.118 "adrfam": "ipv4", 00:23:36.118 "trsvcid": "4420", 00:23:36.118 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.118 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:36.118 "hdgst": false, 00:23:36.118 "ddgst": false 00:23:36.118 }, 00:23:36.118 "method": "bdev_nvme_attach_controller" 00:23:36.118 },{ 00:23:36.118 "params": { 00:23:36.118 "name": "Nvme2", 00:23:36.118 "trtype": "tcp", 00:23:36.118 "traddr": "10.0.0.2", 00:23:36.118 "adrfam": "ipv4", 00:23:36.118 "trsvcid": "4420", 00:23:36.118 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:36.118 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:36.118 "hdgst": false, 00:23:36.118 "ddgst": false 00:23:36.118 }, 00:23:36.118 "method": "bdev_nvme_attach_controller" 00:23:36.118 },{ 00:23:36.118 "params": { 00:23:36.118 "name": "Nvme3", 00:23:36.118 "trtype": "tcp", 00:23:36.118 "traddr": "10.0.0.2", 00:23:36.118 "adrfam": "ipv4", 00:23:36.118 "trsvcid": "4420", 00:23:36.118 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:36.118 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:36.118 "hdgst": false, 00:23:36.118 "ddgst": false 00:23:36.118 }, 00:23:36.118 "method": "bdev_nvme_attach_controller" 00:23:36.118 },{ 00:23:36.118 "params": { 00:23:36.118 "name": "Nvme4", 00:23:36.118 "trtype": "tcp", 00:23:36.118 "traddr": "10.0.0.2", 00:23:36.118 "adrfam": "ipv4", 00:23:36.118 "trsvcid": "4420", 00:23:36.118 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:36.118 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:36.118 "hdgst": false, 00:23:36.118 "ddgst": false 00:23:36.118 }, 00:23:36.118 "method": "bdev_nvme_attach_controller" 00:23:36.118 },{ 00:23:36.118 "params": { 00:23:36.118 "name": "Nvme5", 00:23:36.118 "trtype": "tcp", 00:23:36.118 "traddr": "10.0.0.2", 00:23:36.118 "adrfam": "ipv4", 00:23:36.118 "trsvcid": "4420", 00:23:36.118 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:36.118 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:36.118 "hdgst": false, 00:23:36.118 "ddgst": false 00:23:36.118 }, 00:23:36.118 "method": "bdev_nvme_attach_controller" 00:23:36.119 },{ 00:23:36.119 "params": { 00:23:36.119 "name": "Nvme6", 00:23:36.119 "trtype": "tcp", 00:23:36.119 "traddr": "10.0.0.2", 00:23:36.119 "adrfam": "ipv4", 00:23:36.119 "trsvcid": "4420", 00:23:36.119 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:36.119 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:36.119 "hdgst": false, 00:23:36.119 "ddgst": false 00:23:36.119 }, 00:23:36.119 "method": "bdev_nvme_attach_controller" 00:23:36.119 },{ 00:23:36.119 "params": { 00:23:36.119 "name": "Nvme7", 00:23:36.119 "trtype": "tcp", 00:23:36.119 "traddr": "10.0.0.2", 
00:23:36.119 "adrfam": "ipv4", 00:23:36.119 "trsvcid": "4420", 00:23:36.119 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:36.119 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:36.119 "hdgst": false, 00:23:36.119 "ddgst": false 00:23:36.119 }, 00:23:36.119 "method": "bdev_nvme_attach_controller" 00:23:36.119 },{ 00:23:36.119 "params": { 00:23:36.119 "name": "Nvme8", 00:23:36.119 "trtype": "tcp", 00:23:36.119 "traddr": "10.0.0.2", 00:23:36.119 "adrfam": "ipv4", 00:23:36.119 "trsvcid": "4420", 00:23:36.119 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:36.119 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:36.119 "hdgst": false, 00:23:36.119 "ddgst": false 00:23:36.119 }, 00:23:36.119 "method": "bdev_nvme_attach_controller" 00:23:36.119 },{ 00:23:36.119 "params": { 00:23:36.119 "name": "Nvme9", 00:23:36.119 "trtype": "tcp", 00:23:36.119 "traddr": "10.0.0.2", 00:23:36.119 "adrfam": "ipv4", 00:23:36.119 "trsvcid": "4420", 00:23:36.119 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:36.119 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:36.119 "hdgst": false, 00:23:36.119 "ddgst": false 00:23:36.119 }, 00:23:36.119 "method": "bdev_nvme_attach_controller" 00:23:36.119 },{ 00:23:36.119 "params": { 00:23:36.119 "name": "Nvme10", 00:23:36.119 "trtype": "tcp", 00:23:36.119 "traddr": "10.0.0.2", 00:23:36.119 "adrfam": "ipv4", 00:23:36.119 "trsvcid": "4420", 00:23:36.119 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:36.119 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:36.119 "hdgst": false, 00:23:36.119 "ddgst": false 00:23:36.119 }, 00:23:36.119 "method": "bdev_nvme_attach_controller" 00:23:36.119 }' 00:23:36.119 [2024-11-20 15:33:24.980958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.119 [2024-11-20 15:33:25.011143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.506 Running I/O for 10 seconds... 
00:23:37.506 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:37.506 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:37.506 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:37.506 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.506 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:37.506 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.506 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:37.506 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:37.506 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:37.506 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:23:37.506 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:23:37.506 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:37.506 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:37.506 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:37.506 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:37.506 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.506 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:37.767 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.767 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:23:37.767 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:23:37.767 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:38.028 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:38.028 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:38.028 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:38.028 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:38.028 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.028 15:33:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:38.028 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.028 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:38.028 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:38.028 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:38.289 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:38.289 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:38.289 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:38.289 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:38.289 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.289 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:38.289 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.289 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:23:38.289 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:23:38.289 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:23:38.289 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:23:38.289 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:23:38.289 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 669125 00:23:38.289 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 669125 ']' 00:23:38.289 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 669125 00:23:38.289 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:23:38.289 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:38.289 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 669125 00:23:38.289 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:38.289 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:38.289 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 669125' 00:23:38.289 killing process with pid 669125 00:23:38.289 15:33:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 669125 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 669125
00:23:38.550 Received shutdown signal, test time was about 0.957424 seconds
00:23:38.550 [2024-11-20T14:33:27.510Z] Latency(us) -- all jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0 length 0x400
00:23:38.550 Device      runtime(s)     IOPS   MiB/s  Fail/s   TO/s    Average        min        max
00:23:38.550 Nvme1n1           0.93   275.33   17.21    0.00   0.00  229518.72   18896.21  248162.99
00:23:38.550 Nvme2n1           0.90   214.30   13.39    0.00   0.00  289968.64   16384.00  249910.61
00:23:38.550 Nvme3n1           0.96   271.82   16.99    0.00   0.00  216121.97   14417.92  246415.36
00:23:38.550 Nvme4n1           0.90   284.34   17.77    0.00   0.00  211622.61   12888.75  249910.61
00:23:38.550 Nvme5n1           0.93   274.65   17.17    0.00   0.00  216444.59   18677.76  242920.11
00:23:38.550 Nvme6n1           0.92   278.17   17.39    0.00   0.00  210219.09   18240.85  263891.63
00:23:38.550 Nvme7n1           0.93   276.50   17.28    0.00   0.00  208299.09   20425.39  244667.73
00:23:38.550 Nvme8n1           0.92   282.20   17.64    0.00   0.00  199894.99    4532.91  249910.61
00:23:38.550 Nvme9n1           0.91   211.88   13.24    0.00   0.00  262417.35   22609.92  248162.99
00:23:38.550 Nvme10n1          0.91   215.63   13.48    0.00   0.00  252170.66    5980.16  260396.37
00:23:38.550 [2024-11-20T14:33:27.510Z] ===================================================================================================================
00:23:38.550 [2024-11-20T14:33:27.510Z] Total                2584.81  161.55    0.00   0.00  226525.47    4532.91  263891.63
00:23:38.550 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:23:39.493 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 668903
00:23:39.493 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:23:39.493 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:23:39.493 15:33:28
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:39.493 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:39.493 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:39.493 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:39.493 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:23:39.493 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:39.493 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:23:39.493 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:39.493 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:39.493 rmmod nvme_tcp 00:23:39.753 rmmod nvme_fabrics 00:23:39.753 rmmod nvme_keyring 00:23:39.753 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:39.753 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:23:39.753 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:23:39.753 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 668903 ']' 00:23:39.753 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 668903 00:23:39.753 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 668903 ']' 00:23:39.753 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 668903 00:23:39.753 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:23:39.753 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:39.753 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 668903 00:23:39.753 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:39.753 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:39.753 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 668903' 00:23:39.753 killing process with pid 668903 00:23:39.753 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 668903 00:23:39.753 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 668903 00:23:40.015 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:40.015 15:33:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:40.015 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:40.015 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:23:40.015 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:23:40.015 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:40.015 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:23:40.015 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:40.015 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:40.015 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.015 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:40.015 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.930 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:41.930 00:23:41.930 real 0m7.826s 00:23:41.930 user 0m23.443s 00:23:41.930 sys 0m1.306s 00:23:41.930 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:41.930 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:41.930 ************************************ 00:23:41.930 END TEST nvmf_shutdown_tc2 00:23:41.930 ************************************ 00:23:42.192 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:42.192 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:42.192 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:42.192 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:42.192 ************************************ 00:23:42.192 START TEST nvmf_shutdown_tc3 00:23:42.192 ************************************ 00:23:42.192 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:23:42.192 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:23:42.192 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:42.193 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:42.193 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:42.193 15:33:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:42.193 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:42.193 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.193 15:33:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:42.193 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:42.194 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:42.194 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:42.194 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:42.194 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:42.194 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:42.194 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:42.456 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:42.456 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:42.456 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:42.456 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:42.456 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:42.456 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:42.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:42.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms 00:23:42.456 00:23:42.456 --- 10.0.0.2 ping statistics --- 00:23:42.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.456 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:23:42.456 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:42.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:42.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:23:42.456 00:23:42.456 --- 10.0.0.1 ping statistics --- 00:23:42.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.456 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:23:42.456 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:42.456 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:23:42.456 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:42.456 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:42.456 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:42.456 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:42.456 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:42.456 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:42.456 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:42.456 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:42.456 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:42.456 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:42.456 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:42.456 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=670527 00:23:42.456 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 670527 00:23:42.456 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:42.456 15:33:31
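For readability, the namespace topology that nvmf_tcp_init traces out above condenses to the following commands (taken from the trace itself; cvl_0_0/cvl_0_1 are the two E810 ports picked from net_devs on this host):

    # target side: isolate cvl_0_0 in its own namespace with the target IP
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # initiator side: cvl_0_1 keeps the initiator IP in the default namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    # open the NVMe/TCP port and verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every nvmf_tgt invocation is then wrapped in 'ip netns exec cvl_0_0_ns_spdk' via NVMF_TARGET_NS_CMD, so the target listens on 10.0.0.2 inside the namespace while the initiator side connects from 10.0.0.1 outside it.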
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 670527 ']' 00:23:42.456 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.456 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:42.456 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:42.456 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:42.456 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:42.456 [2024-11-20 15:33:31.400082] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:23:42.456 [2024-11-20 15:33:31.400147] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:42.717 [2024-11-20 15:33:31.494125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:42.717 [2024-11-20 15:33:31.528618] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:42.717 [2024-11-20 15:33:31.528647] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:42.717 [2024-11-20 15:33:31.528653] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:42.717 [2024-11-20 15:33:31.528658] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:42.717 [2024-11-20 15:33:31.528662] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
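A quick decode of the core mask: -m 0x1E is binary 11110, so nvmf_tgt claims cores 1, 2, 3 and 4, which matches the four "Reactor started on core N" notices that follow. Core 0 is left out of the mask; it is where the bdevperf process starts later in this run (its EAL line below shows -c 0x1, i.e. binary 00001).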
00:23:42.717 [2024-11-20 15:33:31.529972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:42.717 [2024-11-20 15:33:31.530126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:42.717 [2024-11-20 15:33:31.530281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:42.717 [2024-11-20 15:33:31.530382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:43.290 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:43.290 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:43.290 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:43.290 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:43.290 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:43.290 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:43.290 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:43.290 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.290 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:43.552 [2024-11-20 15:33:32.253429] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.552 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.552 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:43.552 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:43.552 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:43.552 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:43.552 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:43.552 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:43.552 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:43.552 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:43.552 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:43.552 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:43.552 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:43.552 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:23:43.552 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:43.552 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:43.552 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:43.552 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:43.552 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:43.552 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:43.552 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:43.552 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:43.552 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:43.552 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:43.552 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:43.552 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:43.552 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:43.552 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:43.552 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.552 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:43.552 Malloc1 00:23:43.552 [2024-11-20 15:33:32.367902] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:43.552 Malloc2 00:23:43.552 Malloc3 00:23:43.552 Malloc4 00:23:43.552 Malloc5 00:23:43.814 Malloc6 00:23:43.814 Malloc7 00:23:43.814 Malloc8 00:23:43.814 Malloc9 00:23:43.814 Malloc10 00:23:43.814 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.814 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:43.814 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:43.814 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:43.814 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=670910 00:23:43.814 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 670910 /var/tmp/bdevperf.sock 00:23:43.814 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 670910 ']' 00:23:43.814 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:43.814 15:33:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:43.814 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:43.814 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:43.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:43.814 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:43.814 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:43.814 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:43.814 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:23:43.814 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:23:43.814 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:43.814 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:43.814 { 00:23:43.814 "params": { 00:23:43.814 "name": "Nvme$subsystem", 00:23:43.814 "trtype": "$TEST_TRANSPORT", 00:23:43.814 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.814 "adrfam": "ipv4", 00:23:43.814 "trsvcid": "$NVMF_PORT", 00:23:43.814 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.814 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.815 "hdgst": ${hdgst:-false}, 00:23:43.815 "ddgst": ${ddgst:-false} 00:23:43.815 }, 00:23:43.815 "method": "bdev_nvme_attach_controller" 00:23:43.815 } 00:23:43.815 EOF 00:23:43.815 )") 00:23:44.076 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:44.076 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:44.076 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:44.076 { 00:23:44.076 "params": { 00:23:44.076 "name": "Nvme$subsystem", 00:23:44.076 "trtype": "$TEST_TRANSPORT", 00:23:44.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.076 "adrfam": "ipv4", 00:23:44.076 "trsvcid": "$NVMF_PORT", 00:23:44.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.076 "hdgst": ${hdgst:-false}, 00:23:44.076 "ddgst": ${ddgst:-false} 00:23:44.076 }, 00:23:44.076 "method": "bdev_nvme_attach_controller" 00:23:44.076 } 00:23:44.076 EOF 00:23:44.076 )") 00:23:44.076 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:44.076 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:44.076 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:44.076 { 00:23:44.076 "params": { 00:23:44.076 
"name": "Nvme$subsystem", 00:23:44.076 "trtype": "$TEST_TRANSPORT", 00:23:44.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.076 "adrfam": "ipv4", 00:23:44.076 "trsvcid": "$NVMF_PORT", 00:23:44.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.076 "hdgst": ${hdgst:-false}, 00:23:44.076 "ddgst": ${ddgst:-false} 00:23:44.076 }, 00:23:44.076 "method": "bdev_nvme_attach_controller" 00:23:44.076 } 00:23:44.076 EOF 00:23:44.076 )") 00:23:44.076 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:44.076 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:44.076 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:44.076 { 00:23:44.076 "params": { 00:23:44.076 "name": "Nvme$subsystem", 00:23:44.076 "trtype": "$TEST_TRANSPORT", 00:23:44.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.076 "adrfam": "ipv4", 00:23:44.076 "trsvcid": "$NVMF_PORT", 00:23:44.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.076 "hdgst": ${hdgst:-false}, 00:23:44.076 "ddgst": ${ddgst:-false} 00:23:44.076 }, 00:23:44.076 "method": "bdev_nvme_attach_controller" 00:23:44.076 } 00:23:44.076 EOF 00:23:44.076 )") 00:23:44.076 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:44.076 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:44.076 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:44.076 { 00:23:44.076 "params": { 00:23:44.076 "name": "Nvme$subsystem", 00:23:44.076 "trtype": "$TEST_TRANSPORT", 00:23:44.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.076 "adrfam": "ipv4", 00:23:44.076 "trsvcid": "$NVMF_PORT", 00:23:44.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.076 "hdgst": ${hdgst:-false}, 00:23:44.076 "ddgst": ${ddgst:-false} 00:23:44.076 }, 00:23:44.076 "method": "bdev_nvme_attach_controller" 00:23:44.076 } 00:23:44.076 EOF 00:23:44.076 )") 00:23:44.076 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:44.076 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:44.076 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:44.076 { 00:23:44.076 "params": { 00:23:44.076 "name": "Nvme$subsystem", 00:23:44.076 "trtype": "$TEST_TRANSPORT", 00:23:44.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.076 "adrfam": "ipv4", 00:23:44.076 "trsvcid": "$NVMF_PORT", 00:23:44.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.076 "hdgst": ${hdgst:-false}, 00:23:44.076 "ddgst": ${ddgst:-false} 00:23:44.076 }, 00:23:44.076 "method": "bdev_nvme_attach_controller" 00:23:44.076 } 00:23:44.076 EOF 00:23:44.076 )") 00:23:44.076 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:44.076 [2024-11-20 15:33:32.818734] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
00:23:44.076 [2024-11-20 15:33:32.818787] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid670910 ] 00:23:44.076 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:44.076 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:44.076 { 00:23:44.076 "params": { 00:23:44.076 "name": "Nvme$subsystem", 00:23:44.076 "trtype": "$TEST_TRANSPORT", 00:23:44.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.076 "adrfam": "ipv4", 00:23:44.076 "trsvcid": "$NVMF_PORT", 00:23:44.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.076 "hdgst": ${hdgst:-false}, 00:23:44.076 "ddgst": ${ddgst:-false} 00:23:44.076 }, 00:23:44.076 "method": "bdev_nvme_attach_controller" 00:23:44.076 } 00:23:44.076 EOF 00:23:44.076 )") 00:23:44.076 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:44.076 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:44.076 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:44.076 { 00:23:44.076 "params": { 00:23:44.076 "name": "Nvme$subsystem", 00:23:44.076 "trtype": "$TEST_TRANSPORT", 00:23:44.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.076 "adrfam": "ipv4", 00:23:44.076 "trsvcid": "$NVMF_PORT", 00:23:44.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.076 "hdgst": ${hdgst:-false}, 00:23:44.076 "ddgst": ${ddgst:-false} 00:23:44.076 }, 00:23:44.076 "method": "bdev_nvme_attach_controller" 00:23:44.076 } 00:23:44.076 EOF 00:23:44.076 )") 00:23:44.076 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:44.076 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:44.076 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:44.076 { 00:23:44.076 "params": { 00:23:44.076 "name": "Nvme$subsystem", 00:23:44.076 "trtype": "$TEST_TRANSPORT", 00:23:44.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.076 "adrfam": "ipv4", 00:23:44.076 "trsvcid": "$NVMF_PORT", 00:23:44.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.076 "hdgst": ${hdgst:-false}, 00:23:44.076 "ddgst": ${ddgst:-false} 00:23:44.076 }, 00:23:44.076 "method": "bdev_nvme_attach_controller" 00:23:44.076 } 00:23:44.076 EOF 00:23:44.076 )") 00:23:44.076 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:44.077 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:44.077 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:44.077 { 00:23:44.077 "params": { 00:23:44.077 "name": "Nvme$subsystem", 00:23:44.077 "trtype": "$TEST_TRANSPORT", 00:23:44.077 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.077 
"adrfam": "ipv4", 00:23:44.077 "trsvcid": "$NVMF_PORT", 00:23:44.077 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.077 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.077 "hdgst": ${hdgst:-false}, 00:23:44.077 "ddgst": ${ddgst:-false} 00:23:44.077 }, 00:23:44.077 "method": "bdev_nvme_attach_controller" 00:23:44.077 } 00:23:44.077 EOF 00:23:44.077 )") 00:23:44.077 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:44.077 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:23:44.077 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:23:44.077 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:44.077 "params": { 00:23:44.077 "name": "Nvme1", 00:23:44.077 "trtype": "tcp", 00:23:44.077 "traddr": "10.0.0.2", 00:23:44.077 "adrfam": "ipv4", 00:23:44.077 "trsvcid": "4420", 00:23:44.077 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:44.077 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:44.077 "hdgst": false, 00:23:44.077 "ddgst": false 00:23:44.077 }, 00:23:44.077 "method": "bdev_nvme_attach_controller" 00:23:44.077 },{ 00:23:44.077 "params": { 00:23:44.077 "name": "Nvme2", 00:23:44.077 "trtype": "tcp", 00:23:44.077 "traddr": "10.0.0.2", 00:23:44.077 "adrfam": "ipv4", 00:23:44.077 "trsvcid": "4420", 00:23:44.077 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:44.077 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:44.077 "hdgst": false, 00:23:44.077 "ddgst": false 00:23:44.077 }, 00:23:44.077 "method": "bdev_nvme_attach_controller" 00:23:44.077 },{ 00:23:44.077 "params": { 00:23:44.077 "name": "Nvme3", 00:23:44.077 "trtype": "tcp", 00:23:44.077 "traddr": "10.0.0.2", 00:23:44.077 "adrfam": "ipv4", 00:23:44.077 "trsvcid": "4420", 00:23:44.077 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:44.077 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:44.077 "hdgst": false, 00:23:44.077 "ddgst": false 00:23:44.077 }, 00:23:44.077 "method": "bdev_nvme_attach_controller" 00:23:44.077 },{ 00:23:44.077 "params": { 00:23:44.077 "name": "Nvme4", 00:23:44.077 "trtype": "tcp", 00:23:44.077 "traddr": "10.0.0.2", 00:23:44.077 "adrfam": "ipv4", 00:23:44.077 "trsvcid": "4420", 00:23:44.077 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:44.077 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:44.077 "hdgst": false, 00:23:44.077 "ddgst": false 00:23:44.077 }, 00:23:44.077 "method": "bdev_nvme_attach_controller" 00:23:44.077 },{ 00:23:44.077 "params": { 00:23:44.077 "name": "Nvme5", 00:23:44.077 "trtype": "tcp", 00:23:44.077 "traddr": "10.0.0.2", 00:23:44.077 "adrfam": "ipv4", 00:23:44.077 "trsvcid": "4420", 00:23:44.077 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:44.077 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:44.077 "hdgst": false, 00:23:44.077 "ddgst": false 00:23:44.077 }, 00:23:44.077 "method": "bdev_nvme_attach_controller" 00:23:44.077 },{ 00:23:44.077 "params": { 00:23:44.077 "name": "Nvme6", 00:23:44.077 "trtype": "tcp", 00:23:44.077 "traddr": "10.0.0.2", 00:23:44.077 "adrfam": "ipv4", 00:23:44.077 "trsvcid": "4420", 00:23:44.077 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:44.077 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:44.077 "hdgst": false, 00:23:44.077 "ddgst": false 00:23:44.077 }, 00:23:44.077 "method": "bdev_nvme_attach_controller" 00:23:44.077 },{ 00:23:44.077 "params": { 00:23:44.077 "name": "Nvme7", 00:23:44.077 "trtype": "tcp", 00:23:44.077 "traddr": "10.0.0.2", 
00:23:44.077 "adrfam": "ipv4", 00:23:44.077 "trsvcid": "4420", 00:23:44.077 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:44.077 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:44.077 "hdgst": false, 00:23:44.077 "ddgst": false 00:23:44.077 }, 00:23:44.077 "method": "bdev_nvme_attach_controller" 00:23:44.077 },{ 00:23:44.077 "params": { 00:23:44.077 "name": "Nvme8", 00:23:44.077 "trtype": "tcp", 00:23:44.077 "traddr": "10.0.0.2", 00:23:44.077 "adrfam": "ipv4", 00:23:44.077 "trsvcid": "4420", 00:23:44.077 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:44.077 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:44.077 "hdgst": false, 00:23:44.077 "ddgst": false 00:23:44.077 }, 00:23:44.077 "method": "bdev_nvme_attach_controller" 00:23:44.077 },{ 00:23:44.077 "params": { 00:23:44.077 "name": "Nvme9", 00:23:44.077 "trtype": "tcp", 00:23:44.077 "traddr": "10.0.0.2", 00:23:44.077 "adrfam": "ipv4", 00:23:44.077 "trsvcid": "4420", 00:23:44.077 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:44.077 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:44.077 "hdgst": false, 00:23:44.077 "ddgst": false 00:23:44.077 }, 00:23:44.077 "method": "bdev_nvme_attach_controller" 00:23:44.077 },{ 00:23:44.077 "params": { 00:23:44.077 "name": "Nvme10", 00:23:44.077 "trtype": "tcp", 00:23:44.077 "traddr": "10.0.0.2", 00:23:44.077 "adrfam": "ipv4", 00:23:44.077 "trsvcid": "4420", 00:23:44.077 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:44.077 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:44.077 "hdgst": false, 00:23:44.077 "ddgst": false 00:23:44.077 }, 00:23:44.077 "method": "bdev_nvme_attach_controller" 00:23:44.077 }' 00:23:44.077 [2024-11-20 15:33:32.907822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.077 [2024-11-20 15:33:32.943828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.462 Running I/O for 10 seconds... 
00:23:45.462 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:45.462 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:45.462 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:45.462 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.462 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:45.462 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.462 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:45.462 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:45.462 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:45.462 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:45.462 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:23:45.462 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:23:45.462 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:45.462 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:45.462 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:45.463 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:45.463 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.463 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:45.463 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.463 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:23:45.724 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:23:45.724 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:45.724 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:45.724 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:45.724 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:45.724 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:45.724 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.724 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:45.985 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.985 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=72 00:23:45.985 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 72 -ge 100 ']' 00:23:45.985 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:46.265 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:46.265 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:46.265 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:46.265 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:46.265 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.265 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:46.265 15:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.265 15:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=199 00:23:46.265 15:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 199 -ge 100 ']' 00:23:46.265 15:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:23:46.265 15:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:23:46.265 15:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:23:46.265 15:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 670527 00:23:46.265 15:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 670527 ']' 00:23:46.265 15:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 670527 00:23:46.265 15:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:23:46.265 15:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:46.265 15:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 670527 00:23:46.265 15:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:46.265 15:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:46.265 15:33:35 
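The waitforio helper driving the three iostat samples above (3, then 72, then 199 read ops) is a bounded poll; condensed from the target/shutdown.sh@58-70 lines as traced, it is equivalent to:

    # poll num_read_ops on Nvme1n1 up to 10 times, 0.25 s apart,
    # until at least 100 reads have completed
    ret=1
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 |
                        jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done

Once the threshold is met, the test knows I/O is flowing and proceeds to kill the target (pid 670527) mid-workload, which is exactly what shutdown_tc3 is exercising.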
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 670527' 00:23:46.265 killing process with pid 670527 00:23:46.265 15:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 670527 00:23:46.265 15:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 670527 00:23:46.265 [2024-11-20 15:33:35.103641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157a810 is same with the state(6) to be set
[the recv-state error above repeats continuously from 15:33:35.103641 through 15:33:35.114214 while the killed target tears down its TCP qpairs, cycling through tqpair=0x157a810, 0x15a89f0, 0x157ad00 and 0x157b1d0; the verbatim repetitions are elided]
with the state(6) to be set 00:23:46.267 [2024-11-20 15:33:35.114219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.267 [2024-11-20 15:33:35.114225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.267 [2024-11-20 15:33:35.114230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.267 [2024-11-20 15:33:35.114235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.114240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.114245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.114250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.114255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.114259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.114264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.114268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.114273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.114278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.114287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.114292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.114296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.114301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.114306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.114310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.114315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.114319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.114324] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.114329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.114333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.114338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.114342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.114347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.114352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.114357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.114362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.114367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.114371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.114376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.114381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.114385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.114390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.114395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.114399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.114404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.114408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.114415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b1d0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.115390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.115413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the 
state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.115419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.115424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.115429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.115434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.115439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.115443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.115448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.115453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.115458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.115463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.115468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.115473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.115477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.115482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.115486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.115491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.115496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.115501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.115506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.115511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.115516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.115521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.115525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.115530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.115538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.115543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.115547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.115553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.268 [2024-11-20 15:33:35.115558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.115563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.115568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.115572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.115577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.115582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.115586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.115591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.115596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.115600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.115605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.115611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.115615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.115620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.115624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.269 [2024-11-20 
15:33:35.115630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.115634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.115639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.115643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.115648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.115653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.115658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.115664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.115670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.115675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.115679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.115683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b6c0 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same 
with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116370] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.269 [2024-11-20 15:33:35.116437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.116441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.116447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.116452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.116456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.116462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.116467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.116471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the 
state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.116476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.116481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.116485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.116490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.116495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.116499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.116504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.116509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.116514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.116519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bb90 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 
15:33:35.117388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.270 [2024-11-20 15:33:35.117448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.117453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.117458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.117463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.117467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.117472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.117476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.117481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.117486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.117490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same 
with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.117495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.117500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.117505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.117510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.117515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.117519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.117524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.117528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c060 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118383] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the 
state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.118572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157c530 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.119084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157ca20 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.119098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157ca20 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.119260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.119274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.119280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.271 [2024-11-20 15:33:35.119285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 
15:33:35.119382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same with the state(6) to be set 00:23:46.272 [2024-11-20 15:33:35.119486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8520 is same 
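[note: this line is printed when a qpair is asked to enter the PDU receive state it is already in and the setter logs and returns early (see nvmf_tcp_qpair_set_recv_state at tcp.c:1773, cited in the log itself); a disconnect path that keeps requesting the same state floods the console exactly as condensed above. A minimal sketch of that guard pattern, using simplified stand-in names and state values rather than SPDK's actual definitions:

    #include <stdio.h>

    /* Stand-in for the PDU receive-state enum; only the shape matters here.
     * The value 6 mirrors the "state(6)" seen in the log, not a real SPDK constant. */
    enum pdu_recv_state { PDU_RECV_STATE_READY = 0, PDU_RECV_STATE_ERROR = 6 };

    struct tcp_qpair { enum pdu_recv_state recv_state; };

    static void qpair_set_recv_state(struct tcp_qpair *tqpair, enum pdu_recv_state state)
    {
        if (tqpair->recv_state == state) {
            /* Re-entering the current state is treated as a no-op; this log
             * line is what repeats when a hot path calls the setter in a loop. */
            fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)tqpair, (int)state);
            return;
        }
        tqpair->recv_state = state;
    }

    int main(void)
    {
        struct tcp_qpair q = { PDU_RECV_STATE_READY };
        qpair_set_recv_state(&q, PDU_RECV_STATE_ERROR); /* state changes quietly */
        qpair_set_recv_state(&q, PDU_RECV_STATE_ERROR); /* same state: prints the error line */
        return 0;
    }
]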
00:23:46.272 [2024-11-20 15:33:35.125230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:46.272 [2024-11-20 15:33:35.125267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.272 [2024-11-20 15:33:35.125278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:46.272 [2024-11-20 15:33:35.125286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.272 [2024-11-20 15:33:35.125294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:46.272 [2024-11-20 15:33:35.125302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.272 [2024-11-20 15:33:35.125311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:46.272 [2024-11-20 15:33:35.125318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.272 [2024-11-20 15:33:35.125326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd3ca0 is same with the state(6) to be set
[log condensed: the same sequence, four ASYNC EVENT REQUEST commands (cid 0-3) each completed as ABORTED - SQ DELETION (00/08) followed by the nvme_tcp.c:326 recv-state error, repeats for tqpair=0xcf00d0 (15:33:35.125360-125423) and tqpair=0xd1f830 (15:33:35.125451-125519).]
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.273 [2024-11-20 15:33:35.125505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.273 [2024-11-20 15:33:35.125512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.273 [2024-11-20 15:33:35.125519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1f830 is same with the state(6) to be set 00:23:46.273 [2024-11-20 15:33:35.125547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.273 [2024-11-20 15:33:35.125556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.273 [2024-11-20 15:33:35.125565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.273 [2024-11-20 15:33:35.125572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.273 [2024-11-20 15:33:35.125581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.273 [2024-11-20 15:33:35.125588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.273 [2024-11-20 15:33:35.125596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.273 [2024-11-20 15:33:35.125604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.273 [2024-11-20 15:33:35.125611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd13b60 is same with the state(6) to be set 00:23:46.273 [2024-11-20 15:33:35.125635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.273 [2024-11-20 15:33:35.125644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.273 [2024-11-20 15:33:35.125653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.273 [2024-11-20 15:33:35.125661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.273 [2024-11-20 15:33:35.125669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.273 [2024-11-20 15:33:35.125677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.273 [2024-11-20 15:33:35.125685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.273 [2024-11-20 15:33:35.125692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.273 [2024-11-20 15:33:35.125699] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0790 is same with the state(6) to be set 00:23:46.273 [2024-11-20 15:33:35.125723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.273 [2024-11-20 15:33:35.125732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.273 [2024-11-20 15:33:35.125740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.273 [2024-11-20 15:33:35.125750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.273 [2024-11-20 15:33:35.125761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.273 [2024-11-20 15:33:35.125772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.273 [2024-11-20 15:33:35.125782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.273 [2024-11-20 15:33:35.125789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.273 [2024-11-20 15:33:35.125797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcddf20 is same with the state(6) to be set 00:23:46.273 [2024-11-20 15:33:35.125821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.273 [2024-11-20 15:33:35.125829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.273 [2024-11-20 15:33:35.125838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.273 [2024-11-20 15:33:35.125845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.273 [2024-11-20 15:33:35.125854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.273 [2024-11-20 15:33:35.125861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.273 [2024-11-20 15:33:35.125870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.273 [2024-11-20 15:33:35.125877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.273 [2024-11-20 15:33:35.125884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0fc0 is same with the state(6) to be set 00:23:46.273 [2024-11-20 15:33:35.125907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.273 [2024-11-20 15:33:35.125918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:46.273 [2024-11-20 15:33:35.125926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.273 [2024-11-20 15:33:35.125933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.273 [2024-11-20 15:33:35.125942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.273 [2024-11-20 15:33:35.125949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.273 [2024-11-20 15:33:35.125957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.273 [2024-11-20 15:33:35.125965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.273 [2024-11-20 15:33:35.125972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b2cb0 is same with the state(6) to be set 00:23:46.273 [2024-11-20 15:33:35.125994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.273 [2024-11-20 15:33:35.126005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.273 [2024-11-20 15:33:35.126013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.273 [2024-11-20 15:33:35.126021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.273 [2024-11-20 15:33:35.126030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.274 [2024-11-20 15:33:35.126037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.274 [2024-11-20 15:33:35.126045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.274 [2024-11-20 15:33:35.126052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.274 [2024-11-20 15:33:35.126060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b2850 is same with the state(6) to be set 00:23:46.274 [2024-11-20 15:33:35.126083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.274 [2024-11-20 15:33:35.126093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.274 [2024-11-20 15:33:35.126101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.274 [2024-11-20 15:33:35.126109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.274 [2024-11-20 15:33:35.126117] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.274 [2024-11-20 15:33:35.126124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.274 [2024-11-20 15:33:35.126133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.274 [2024-11-20 15:33:35.126140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.274 [2024-11-20 15:33:35.126148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ca610 is same with the state(6) to be set 00:23:46.274 [2024-11-20 15:33:35.145152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcd3ca0 (9): Bad file descriptor 00:23:46.274 [2024-11-20 15:33:35.145207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf00d0 (9): Bad file descriptor 00:23:46.274 [2024-11-20 15:33:35.145221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd1f830 (9): Bad file descriptor 00:23:46.274 [2024-11-20 15:33:35.145237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd13b60 (9): Bad file descriptor 00:23:46.274 [2024-11-20 15:33:35.145254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b0790 (9): Bad file descriptor 00:23:46.274 [2024-11-20 15:33:35.145271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcddf20 (9): Bad file descriptor 00:23:46.274 [2024-11-20 15:33:35.145289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b0fc0 (9): Bad file descriptor 00:23:46.274 [2024-11-20 15:33:35.145305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b2cb0 (9): Bad file descriptor 00:23:46.274 [2024-11-20 15:33:35.145320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b2850 (9): Bad file descriptor 00:23:46.274 [2024-11-20 15:33:35.145348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ca610 (9): Bad file descriptor 00:23:46.274 [2024-11-20 15:33:35.145403] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:46.274 [2024-11-20 15:33:35.145451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.274 [2024-11-20 15:33:35.145462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.274 [2024-11-20 15:33:35.145480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.274 [2024-11-20 15:33:35.145488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.274 [2024-11-20 15:33:35.145499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.274 [2024-11-20 15:33:35.145506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.274 [2024-11-20 15:33:35.145516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.274 [2024-11-20 15:33:35.145523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.274 [2024-11-20 15:33:35.145533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.274 [2024-11-20 15:33:35.145541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.274 [2024-11-20 15:33:35.145550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.274 [2024-11-20 15:33:35.145558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.274 [2024-11-20 15:33:35.145568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.274 [2024-11-20 15:33:35.145576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.274 [2024-11-20 15:33:35.145586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.274 [2024-11-20 15:33:35.145593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.274 [2024-11-20 15:33:35.145602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.274 [2024-11-20 15:33:35.145609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.274 [2024-11-20 15:33:35.145619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.274 [2024-11-20 15:33:35.145626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.274 [2024-11-20 15:33:35.145636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.274 [2024-11-20 15:33:35.145644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.274 [2024-11-20 15:33:35.145654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.274 [2024-11-20 15:33:35.145661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.274 [2024-11-20 15:33:35.145674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.274 [2024-11-20 15:33:35.145682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:46.274 [2024-11-20 15:33:35.145692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.274 [2024-11-20 15:33:35.145699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.274 [2024-11-20 15:33:35.145709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.274 [2024-11-20 15:33:35.145716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.274 [2024-11-20 15:33:35.145726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.274 [2024-11-20 15:33:35.145733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.274 [2024-11-20 15:33:35.145742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.274 [2024-11-20 15:33:35.145751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.274 [2024-11-20 15:33:35.145760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.274 [2024-11-20 15:33:35.145768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.274 [2024-11-20 15:33:35.145777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.274 [2024-11-20 15:33:35.145785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.274 [2024-11-20 15:33:35.145796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.274 [2024-11-20 15:33:35.145803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.274 [2024-11-20 15:33:35.145813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.274 [2024-11-20 15:33:35.145820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.275 [2024-11-20 15:33:35.145830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.275 [2024-11-20 15:33:35.145839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.275 [2024-11-20 15:33:35.145848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.275 [2024-11-20 15:33:35.145856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:46.275 [2024-11-20 15:33:35.145865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.275 [2024-11-20 15:33:35.145873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.275 [2024-11-20 15:33:35.145883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.275 [2024-11-20 15:33:35.145893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.275 [2024-11-20 15:33:35.145902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.275 [2024-11-20 15:33:35.145909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.275 [2024-11-20 15:33:35.145919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.275 [2024-11-20 15:33:35.145927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.275 [2024-11-20 15:33:35.145936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.275 [2024-11-20 15:33:35.145943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.275 [2024-11-20 15:33:35.145953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.275 [2024-11-20 15:33:35.145961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.275 [2024-11-20 15:33:35.145970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.275 [2024-11-20 15:33:35.145978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.275 [2024-11-20 15:33:35.145987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.275 [2024-11-20 15:33:35.145995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.275 [2024-11-20 15:33:35.146004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.275 [2024-11-20 15:33:35.146012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.275 [2024-11-20 15:33:35.146022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.275 [2024-11-20 15:33:35.146030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.275 
[2024-11-20 15:33:35.146039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.275 [2024-11-20 15:33:35.146048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.275 [2024-11-20 15:33:35.146057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.275 [2024-11-20 15:33:35.146065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.275 [2024-11-20 15:33:35.146074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.275 [2024-11-20 15:33:35.146082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.275 [2024-11-20 15:33:35.146092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.275 [2024-11-20 15:33:35.146100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.275 [2024-11-20 15:33:35.146112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.275 [2024-11-20 15:33:35.146119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.275 [2024-11-20 15:33:35.146129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.275 [2024-11-20 15:33:35.146138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.275 [2024-11-20 15:33:35.146147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.275 [2024-11-20 15:33:35.146154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.275 [2024-11-20 15:33:35.146169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.275 [2024-11-20 15:33:35.146177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.275 [2024-11-20 15:33:35.146187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.275 [2024-11-20 15:33:35.146195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.275 [2024-11-20 15:33:35.146204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.275 [2024-11-20 15:33:35.146211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.275 [2024-11-20 
15:33:35.146221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.275 [2024-11-20 15:33:35.146228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.275 [2024-11-20 15:33:35.146238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.275 [2024-11-20 15:33:35.146245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.275 [2024-11-20 15:33:35.146255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.275 [2024-11-20 15:33:35.146263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.275 [2024-11-20 15:33:35.146272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.275 [2024-11-20 15:33:35.146280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.275 [2024-11-20 15:33:35.146288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.275 [2024-11-20 15:33:35.146296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.275 [2024-11-20 15:33:35.146305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.275 [2024-11-20 15:33:35.146313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.275 [2024-11-20 15:33:35.146322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.275 [2024-11-20 15:33:35.146331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.275 [2024-11-20 15:33:35.146341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.275 [2024-11-20 15:33:35.146348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.275 [2024-11-20 15:33:35.146358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.276 [2024-11-20 15:33:35.146365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.276 [2024-11-20 15:33:35.146374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.276 [2024-11-20 15:33:35.146382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.276 [2024-11-20 
15:33:35.146393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.276 [2024-11-20 15:33:35.146400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.276 [2024-11-20 15:33:35.146410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.276 [2024-11-20 15:33:35.146417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.276 [2024-11-20 15:33:35.146427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.276 [2024-11-20 15:33:35.146434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.276 [2024-11-20 15:33:35.146443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.276 [2024-11-20 15:33:35.146450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.276 [2024-11-20 15:33:35.146460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.276 [2024-11-20 15:33:35.146467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.276 [2024-11-20 15:33:35.146477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.276 [2024-11-20 15:33:35.146484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.276 [2024-11-20 15:33:35.146494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.276 [2024-11-20 15:33:35.146501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.276 [2024-11-20 15:33:35.146511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.276 [2024-11-20 15:33:35.146518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.276 [2024-11-20 15:33:35.146528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.276 [2024-11-20 15:33:35.146535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.276 [2024-11-20 15:33:35.146547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.276 [2024-11-20 15:33:35.146554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.276 [2024-11-20 
15:33:35.146563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.276 [2024-11-20 15:33:35.146570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.276 [2024-11-20 15:33:35.146840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.276 [2024-11-20 15:33:35.146857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.276 [2024-11-20 15:33:35.146869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.276 [2024-11-20 15:33:35.146877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.276 [2024-11-20 15:33:35.146886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.276 [2024-11-20 15:33:35.146896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.276 [2024-11-20 15:33:35.146906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.276 [2024-11-20 15:33:35.146914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.276 [2024-11-20 15:33:35.146924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.276 [2024-11-20 15:33:35.146932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.276 [2024-11-20 15:33:35.146942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.276 [2024-11-20 15:33:35.146950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.276 [2024-11-20 15:33:35.146960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.276 [2024-11-20 15:33:35.146967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.276 [2024-11-20 15:33:35.146977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.276 [2024-11-20 15:33:35.146985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.276 [2024-11-20 15:33:35.146995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.276 [2024-11-20 15:33:35.147002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.276 [2024-11-20 15:33:35.147013] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.276 [2024-11-20 15:33:35.147021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.276 [2024-11-20 15:33:35.147030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.276 [2024-11-20 15:33:35.147042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.276 [2024-11-20 15:33:35.147051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.276 [2024-11-20 15:33:35.147059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.276 [2024-11-20 15:33:35.147068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.276 [2024-11-20 15:33:35.147076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.276 [2024-11-20 15:33:35.147085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.276 [2024-11-20 15:33:35.147094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.276 [2024-11-20 15:33:35.147104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.276 [2024-11-20 15:33:35.147111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.276 [2024-11-20 15:33:35.147121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.277 [2024-11-20 15:33:35.147128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.277 [2024-11-20 15:33:35.147138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.277 [2024-11-20 15:33:35.147145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.277 [2024-11-20 15:33:35.147155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.277 [2024-11-20 15:33:35.147169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.277 [2024-11-20 15:33:35.147179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.277 [2024-11-20 15:33:35.147187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.277 [2024-11-20 15:33:35.147196] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.277 [2024-11-20 15:33:35.147204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.277 [2024-11-20 15:33:35.147213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.277 [2024-11-20 15:33:35.147220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.277 [2024-11-20 15:33:35.147230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.277 [2024-11-20 15:33:35.147238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.277 [2024-11-20 15:33:35.147247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.277 [2024-11-20 15:33:35.147254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.277 [2024-11-20 15:33:35.147266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.277 [2024-11-20 15:33:35.147274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.277 [2024-11-20 15:33:35.147284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.277 [2024-11-20 15:33:35.147292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.277 [2024-11-20 15:33:35.147302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.277 [2024-11-20 15:33:35.147309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.277 [2024-11-20 15:33:35.147319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.277 [2024-11-20 15:33:35.147326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.277 [2024-11-20 15:33:35.147335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.277 [2024-11-20 15:33:35.147343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.277 [2024-11-20 15:33:35.147352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.277 [2024-11-20 15:33:35.147359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.277 [2024-11-20 15:33:35.147369] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.277 [2024-11-20 15:33:35.147376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.277 [2024-11-20 15:33:35.147386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.277 [2024-11-20 15:33:35.147394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.277 [2024-11-20 15:33:35.147403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.277 [2024-11-20 15:33:35.147410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.277 [2024-11-20 15:33:35.147420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.277 [2024-11-20 15:33:35.147428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.277 [2024-11-20 15:33:35.147437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.277 [2024-11-20 15:33:35.147445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.277 [2024-11-20 15:33:35.147454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.277 [2024-11-20 15:33:35.147462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.277 [2024-11-20 15:33:35.147474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.277 [2024-11-20 15:33:35.147483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.277 [2024-11-20 15:33:35.147493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.277 [2024-11-20 15:33:35.147501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.277 [2024-11-20 15:33:35.147511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.277 [2024-11-20 15:33:35.147518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.277 [2024-11-20 15:33:35.147528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.277 [2024-11-20 15:33:35.147535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.277 [2024-11-20 15:33:35.147545] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.277 [2024-11-20 15:33:35.147553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.277 [2024-11-20 15:33:35.147562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.277 [2024-11-20 15:33:35.147569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.277 [2024-11-20 15:33:35.147579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.277 [2024-11-20 15:33:35.147586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.277 [2024-11-20 15:33:35.147596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.277 [2024-11-20 15:33:35.147603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.277 [2024-11-20 15:33:35.147612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.277 [2024-11-20 15:33:35.147620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.277 [2024-11-20 15:33:35.147629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.277 [2024-11-20 15:33:35.147636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.277 [2024-11-20 15:33:35.147646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.277 [2024-11-20 15:33:35.147654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.277 [2024-11-20 15:33:35.147663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.277 [2024-11-20 15:33:35.147670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.277 [2024-11-20 15:33:35.147680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.277 [2024-11-20 15:33:35.147688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.277 [2024-11-20 15:33:35.147700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.277 [2024-11-20 15:33:35.147707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.277 [2024-11-20 15:33:35.147717] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.278 [2024-11-20 15:33:35.147725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.278 [2024-11-20 15:33:35.147734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.278 [2024-11-20 15:33:35.147742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.278 [2024-11-20 15:33:35.147751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.278 [2024-11-20 15:33:35.147759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.278 [2024-11-20 15:33:35.147768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.278 [2024-11-20 15:33:35.147776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.278 [2024-11-20 15:33:35.147785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.278 [2024-11-20 15:33:35.147793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.278 [2024-11-20 15:33:35.147802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.278 [2024-11-20 15:33:35.147810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.278 [2024-11-20 15:33:35.147819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.278 [2024-11-20 15:33:35.147826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.278 [2024-11-20 15:33:35.147836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.278 [2024-11-20 15:33:35.147843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.278 [2024-11-20 15:33:35.147852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.278 [2024-11-20 15:33:35.147859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.278 [2024-11-20 15:33:35.147869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.278 [2024-11-20 15:33:35.147877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.278 [2024-11-20 15:33:35.147886] nvme_qpair.c: 
00:23:46.278 [2024-11-20 15:33:35.147886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.278 [2024-11-20 15:33:35.147894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.278 [2024-11-20 15:33:35.147903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.278 [2024-11-20 15:33:35.147913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.278 [2024-11-20 15:33:35.147923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.278 [2024-11-20 15:33:35.147930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.278 [2024-11-20 15:33:35.147939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.278 [2024-11-20 15:33:35.147947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.278 [2024-11-20 15:33:35.147957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.278 [2024-11-20 15:33:35.147964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.278 [2024-11-20 15:33:35.148045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.278 [2024-11-20 15:33:35.148055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.278 [2024-11-20 15:33:35.148066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.278 [2024-11-20 15:33:35.148074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.278 [2024-11-20 15:33:35.148083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.278 [2024-11-20 15:33:35.148090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.278 [2024-11-20 15:33:35.148100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.278 [2024-11-20 15:33:35.148108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.278 [2024-11-20 15:33:35.148119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.278 [2024-11-20 15:33:35.148127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.278 [2024-11-20 15:33:35.148136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.278 [2024-11-20 15:33:35.148144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.278 [2024-11-20 15:33:35.148154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.278 [2024-11-20 15:33:35.148167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.278 [2024-11-20 15:33:35.148177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.278 [2024-11-20 15:33:35.148184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.278 [2024-11-20 15:33:35.148193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.278 [2024-11-20 15:33:35.148201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.278 [2024-11-20 15:33:35.148213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.278 [2024-11-20 15:33:35.148221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.278 [2024-11-20 15:33:35.148230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.278 [2024-11-20 15:33:35.148238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.278 [2024-11-20 15:33:35.148248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.278 [2024-11-20 15:33:35.148255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.278 [2024-11-20 15:33:35.148265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.278 [2024-11-20 15:33:35.148272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.278 [2024-11-20 15:33:35.148282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.278 [2024-11-20 15:33:35.148289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.278 [2024-11-20 15:33:35.148299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.278 [2024-11-20 15:33:35.148306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.278 [2024-11-20 15:33:35.148316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.278 [2024-11-20 15:33:35.148323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.278 [2024-11-20 15:33:35.148332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.278 [2024-11-20 15:33:35.148340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.278 [2024-11-20 15:33:35.148349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.278 [2024-11-20 15:33:35.148357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.278 [2024-11-20 15:33:35.148366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.278 [2024-11-20 15:33:35.148373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.279 [2024-11-20 15:33:35.148383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.279 [2024-11-20 15:33:35.148391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.279 [2024-11-20 15:33:35.148400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.279 [2024-11-20 15:33:35.148408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.279 [2024-11-20 15:33:35.148417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.279 [2024-11-20 15:33:35.148426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.279 [2024-11-20 15:33:35.148436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.279 [2024-11-20 15:33:35.148443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.279 [2024-11-20 15:33:35.148452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.279 [2024-11-20 15:33:35.148459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.279 [2024-11-20 15:33:35.148470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.279 [2024-11-20 15:33:35.148477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.279 [2024-11-20 15:33:35.148487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.279 [2024-11-20 15:33:35.148494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.279 [2024-11-20 15:33:35.148503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.279 [2024-11-20 15:33:35.148511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.279 [2024-11-20 15:33:35.148520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.279 [2024-11-20 15:33:35.148528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.279 [2024-11-20 15:33:35.148537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.279 [2024-11-20 15:33:35.148544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.279 [2024-11-20 15:33:35.148554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.279 [2024-11-20 15:33:35.148561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.279 [2024-11-20 15:33:35.148571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.279 [2024-11-20 15:33:35.148578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.279 [2024-11-20 15:33:35.148587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.279 [2024-11-20 15:33:35.148595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.279 [2024-11-20 15:33:35.148604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.279 [2024-11-20 15:33:35.148611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.279 [2024-11-20 15:33:35.148620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.279 [2024-11-20 15:33:35.148628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.279 [2024-11-20 15:33:35.148641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.279 [2024-11-20 15:33:35.148648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.279 [2024-11-20 15:33:35.148658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.279 [2024-11-20 15:33:35.148666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.279 [2024-11-20 15:33:35.148676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.279 [2024-11-20 15:33:35.148683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.279 [2024-11-20 15:33:35.148693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.279 [2024-11-20 15:33:35.148701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.279 [2024-11-20 15:33:35.148710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.279 [2024-11-20 15:33:35.148718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.279 [2024-11-20 15:33:35.148727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.279 [2024-11-20 15:33:35.148734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.279 [2024-11-20 15:33:35.148744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.279 [2024-11-20 15:33:35.148752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.279 [2024-11-20 15:33:35.148761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.279 [2024-11-20 15:33:35.148768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.279 [2024-11-20 15:33:35.148778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.279 [2024-11-20 15:33:35.148785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.279 [2024-11-20 15:33:35.148795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.279 [2024-11-20 15:33:35.148802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.279 [2024-11-20 15:33:35.148811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.279 [2024-11-20 15:33:35.148819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.279 [2024-11-20 15:33:35.148829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.279 [2024-11-20 15:33:35.148836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.279 [2024-11-20 15:33:35.148845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.279 [2024-11-20 15:33:35.148854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.279 [2024-11-20 15:33:35.148863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.279 [2024-11-20 15:33:35.148871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.279 [2024-11-20 15:33:35.148880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.279 [2024-11-20 15:33:35.148887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.279 [2024-11-20 15:33:35.157607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.279 [2024-11-20 15:33:35.157651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.279 [2024-11-20 15:33:35.157664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.279 [2024-11-20 15:33:35.157673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.279 [2024-11-20 15:33:35.157684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.279 [2024-11-20 15:33:35.157693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.279 [2024-11-20 15:33:35.157703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.279 [2024-11-20 15:33:35.157710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.279 [2024-11-20 15:33:35.157720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.279 [2024-11-20 15:33:35.157728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.280 [2024-11-20 15:33:35.157738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.280 [2024-11-20 15:33:35.157746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.280 [2024-11-20 15:33:35.157755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.280 [2024-11-20 15:33:35.157763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.280 [2024-11-20 15:33:35.157772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.280 [2024-11-20 15:33:35.157780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.280 [2024-11-20 15:33:35.157790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.280 [2024-11-20 15:33:35.157797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.280 [2024-11-20 15:33:35.157806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.280 [2024-11-20 15:33:35.157814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.280 [2024-11-20 15:33:35.157834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.280 [2024-11-20 15:33:35.157842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.280 [2024-11-20 15:33:35.157851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.280 [2024-11-20 15:33:35.157858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.280 [2024-11-20 15:33:35.157868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.280 [2024-11-20 15:33:35.157876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.280 [2024-11-20 15:33:35.157885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.280 [2024-11-20 15:33:35.157893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.280 [2024-11-20 15:33:35.157902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.280 [2024-11-20 15:33:35.157909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.280 [2024-11-20 15:33:35.158219] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:23:46.280 [2024-11-20 15:33:35.158268] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:23:46.280 [2024-11-20 15:33:35.158281] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:23:46.280 [2024-11-20 15:33:35.162313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.280 [2024-11-20 15:33:35.162339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.280 [2024-11-20 15:33:35.162355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.280 [2024-11-20 15:33:35.162364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.280 [2024-11-20 15:33:35.162374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.280 [2024-11-20 15:33:35.162382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.280 [2024-11-20 15:33:35.162392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.280 [2024-11-20 15:33:35.162400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.280 [2024-11-20 15:33:35.162409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.280 [2024-11-20 15:33:35.162417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.280 [2024-11-20 15:33:35.162427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.280 [2024-11-20 15:33:35.162435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.280 [2024-11-20 15:33:35.162450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.280 [2024-11-20 15:33:35.162458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.280 [2024-11-20 15:33:35.162468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.280 [2024-11-20 15:33:35.162475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.280 [2024-11-20 15:33:35.162485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.280 [2024-11-20 15:33:35.162492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.280 [2024-11-20 15:33:35.162501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.280 [2024-11-20 15:33:35.162509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.280 [2024-11-20 15:33:35.162519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.280 [2024-11-20 15:33:35.162526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.280 [2024-11-20 15:33:35.162536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.280 [2024-11-20 15:33:35.162543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.280 [2024-11-20 15:33:35.162553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.280 [2024-11-20 15:33:35.162561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.280 [2024-11-20 15:33:35.162570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.280 [2024-11-20 15:33:35.162578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.280 [2024-11-20 15:33:35.162588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.280 [2024-11-20 15:33:35.162596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.280 [2024-11-20 15:33:35.162605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.280 [2024-11-20 15:33:35.162612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.280 [2024-11-20 15:33:35.162622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.280 [2024-11-20 15:33:35.162630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.280 [2024-11-20 15:33:35.162640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.280 [2024-11-20 15:33:35.162648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.280 [2024-11-20 15:33:35.162658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.280 [2024-11-20 15:33:35.162668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.280 [2024-11-20 15:33:35.162678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.280 [2024-11-20 15:33:35.162687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.280 [2024-11-20 15:33:35.162697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.280 [2024-11-20 15:33:35.162705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.280 [2024-11-20 15:33:35.162714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.280 [2024-11-20 15:33:35.162722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.280 [2024-11-20 15:33:35.162732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.280 [2024-11-20 15:33:35.162739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.280 [2024-11-20 15:33:35.162749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.280 [2024-11-20 15:33:35.162757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.280 [2024-11-20 15:33:35.162766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.281 [2024-11-20 15:33:35.162774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.281 [2024-11-20 15:33:35.162783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.281 [2024-11-20 15:33:35.162791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.281 [2024-11-20 15:33:35.162800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.281 [2024-11-20 15:33:35.162807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.281 [2024-11-20 15:33:35.162817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.281 [2024-11-20 15:33:35.162825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.281 [2024-11-20 15:33:35.162835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.281 [2024-11-20 15:33:35.162842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.281 [2024-11-20 15:33:35.162852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.281 [2024-11-20 15:33:35.162859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.281 [2024-11-20 15:33:35.162868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.281 [2024-11-20 15:33:35.162876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.281 [2024-11-20 15:33:35.162888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.281 [2024-11-20 15:33:35.162896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.281 [2024-11-20 15:33:35.162906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.281 [2024-11-20 15:33:35.162914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.281 [2024-11-20 15:33:35.162924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.281 [2024-11-20 15:33:35.162931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.281 [2024-11-20 15:33:35.162942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.281 [2024-11-20 15:33:35.162950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.281 [2024-11-20 15:33:35.162960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.281 [2024-11-20 15:33:35.162969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.281 [2024-11-20 15:33:35.162981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.281 [2024-11-20 15:33:35.162990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.281 [2024-11-20 15:33:35.163001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.281 [2024-11-20 15:33:35.163011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.281 [2024-11-20 15:33:35.163021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.281 [2024-11-20 15:33:35.163030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.281 [2024-11-20 15:33:35.163042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.281 [2024-11-20 15:33:35.163051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.281 [2024-11-20 15:33:35.163062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.281 [2024-11-20 15:33:35.163071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.281 [2024-11-20 15:33:35.163081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.281 [2024-11-20 15:33:35.163089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.281 [2024-11-20 15:33:35.163100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.281 [2024-11-20 15:33:35.163109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.281 [2024-11-20 15:33:35.163118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.281 [2024-11-20 15:33:35.163128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.281 [2024-11-20 15:33:35.163138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.281 [2024-11-20 15:33:35.163146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.281 [2024-11-20 15:33:35.163156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.281 [2024-11-20 15:33:35.163169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.281 [2024-11-20 15:33:35.163179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.281 [2024-11-20 15:33:35.163187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.281 [2024-11-20 15:33:35.163196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.281 [2024-11-20 15:33:35.163204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.281 [2024-11-20 15:33:35.163214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.281 [2024-11-20 15:33:35.163222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.281 [2024-11-20 15:33:35.163231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.281 [2024-11-20 15:33:35.163240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.281 [2024-11-20 15:33:35.163250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.281 [2024-11-20 15:33:35.163257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.281 [2024-11-20 15:33:35.163267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.281 [2024-11-20 15:33:35.163274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.281 [2024-11-20 15:33:35.163284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.281 [2024-11-20 15:33:35.163292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.281 [2024-11-20 15:33:35.163302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.281 [2024-11-20 15:33:35.163310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.281 [2024-11-20 15:33:35.163320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.281 [2024-11-20 15:33:35.163327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.281 [2024-11-20 15:33:35.163336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.281 [2024-11-20 15:33:35.163344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.281 [2024-11-20 15:33:35.163356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.281 [2024-11-20 15:33:35.163364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.282 [2024-11-20 15:33:35.163374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.282 [2024-11-20 15:33:35.163382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.282 [2024-11-20 15:33:35.163391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.282 [2024-11-20 15:33:35.163398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.282 [2024-11-20 15:33:35.163409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.282 [2024-11-20 15:33:35.163417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.282 [2024-11-20 15:33:35.163426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.282 [2024-11-20 15:33:35.163433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.282 [2024-11-20 15:33:35.163443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.282 [2024-11-20 15:33:35.163451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.282 [2024-11-20 15:33:35.163460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.282 [2024-11-20 15:33:35.163468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.282 [2024-11-20 15:33:35.163477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.282 [2024-11-20 15:33:35.163484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.282 [2024-11-20 15:33:35.164803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.282 [2024-11-20 15:33:35.164819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.282 [2024-11-20 15:33:35.164834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.282 [2024-11-20 15:33:35.164844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.282 [2024-11-20 15:33:35.164855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.282 [2024-11-20 15:33:35.164865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.282 [2024-11-20 15:33:35.164877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.282 [2024-11-20 15:33:35.164885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.282 [2024-11-20 15:33:35.164897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.282 [2024-11-20 15:33:35.164910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.282 [2024-11-20 15:33:35.164922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.282 [2024-11-20 15:33:35.164931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.282 [2024-11-20 15:33:35.164941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.282 [2024-11-20 15:33:35.164949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.282 [2024-11-20 15:33:35.164959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.282 [2024-11-20 15:33:35.164967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.282 [2024-11-20 15:33:35.164976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.282 [2024-11-20 15:33:35.164983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.282 [2024-11-20 15:33:35.164993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.282 [2024-11-20 15:33:35.165000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.282 [2024-11-20 15:33:35.165011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.282 [2024-11-20 15:33:35.165019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.282 [2024-11-20 15:33:35.165029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.282 [2024-11-20 15:33:35.165037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.282 [2024-11-20 15:33:35.165046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.282 [2024-11-20 15:33:35.165054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.282 [2024-11-20 15:33:35.165064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.282 [2024-11-20 15:33:35.165071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.282 [2024-11-20 15:33:35.165083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.282 [2024-11-20 15:33:35.165091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.282 [2024-11-20 15:33:35.165100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.282 [2024-11-20 15:33:35.165108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.282 [2024-11-20 15:33:35.165118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.282 [2024-11-20 15:33:35.165126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.282 [2024-11-20 15:33:35.165138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.282 [2024-11-20 15:33:35.165146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.282 [2024-11-20 15:33:35.165156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.282 [2024-11-20 15:33:35.165171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.282 [2024-11-20 15:33:35.165181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.282 [2024-11-20 15:33:35.165189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.282 [2024-11-20 15:33:35.165199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.282 [2024-11-20 15:33:35.165206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.282 [2024-11-20 15:33:35.165216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.282 [2024-11-20 15:33:35.165223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.282 [2024-11-20 15:33:35.165233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.282 [2024-11-20 15:33:35.165240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.283 [2024-11-20 15:33:35.165251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.283 [2024-11-20 15:33:35.165258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.283 [2024-11-20 15:33:35.165268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.283 [2024-11-20 15:33:35.165275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.283 [2024-11-20 15:33:35.165285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.283 [2024-11-20 15:33:35.165292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.283 [2024-11-20 15:33:35.165302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.283 [2024-11-20 15:33:35.165309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.283 [2024-11-20 15:33:35.165319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.283 [2024-11-20 15:33:35.165326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.283 [2024-11-20 15:33:35.165335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.283 [2024-11-20 15:33:35.165343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.283 [2024-11-20 15:33:35.165352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.283 [2024-11-20 15:33:35.165362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.283 [2024-11-20 15:33:35.165372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.283 [2024-11-20 15:33:35.165379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.283 [2024-11-20 15:33:35.165389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.283 [2024-11-20 15:33:35.165397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.283 [2024-11-20 15:33:35.165407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.283 [2024-11-20 15:33:35.165415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.283 [2024-11-20 15:33:35.165424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.283 [2024-11-20 15:33:35.165432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.283 [2024-11-20 15:33:35.165442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.283 [2024-11-20 15:33:35.165450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.283 [2024-11-20 15:33:35.165459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.283 [2024-11-20 15:33:35.165477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.283 [2024-11-20 15:33:35.165484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.283 [2024-11-20 15:33:35.165494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.283 [2024-11-20 15:33:35.165501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.283 [2024-11-20 15:33:35.165510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.283 [2024-11-20 15:33:35.165518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.283 [2024-11-20 15:33:35.165528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.283 [2024-11-20 15:33:35.165535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.283 [2024-11-20 15:33:35.165545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.283 [2024-11-20 15:33:35.165552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.283 [2024-11-20 15:33:35.165562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.283 [2024-11-20 15:33:35.165569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.283 [2024-11-20 15:33:35.165581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.283 [2024-11-20 15:33:35.165589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.283 [2024-11-20 15:33:35.165599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.283 [2024-11-20 15:33:35.165606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.283 [2024-11-20 15:33:35.165618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.283 [2024-11-20 15:33:35.165625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.283 [2024-11-20 15:33:35.165635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.283 [2024-11-20 15:33:35.165643] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.283 [2024-11-20 15:33:35.165655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.283 [2024-11-20 15:33:35.165662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.283 [2024-11-20 15:33:35.165672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.283 [2024-11-20 15:33:35.165680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.283 [2024-11-20 15:33:35.165690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.283 [2024-11-20 15:33:35.165697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.283 [2024-11-20 15:33:35.165707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.283 [2024-11-20 15:33:35.165715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.283 [2024-11-20 15:33:35.165724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.283 [2024-11-20 15:33:35.165731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.283 [2024-11-20 15:33:35.165741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.283 [2024-11-20 15:33:35.165748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.283 [2024-11-20 15:33:35.165758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.283 [2024-11-20 15:33:35.165765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.283 [2024-11-20 15:33:35.165776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.283 [2024-11-20 15:33:35.165783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.283 [2024-11-20 15:33:35.165793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.283 [2024-11-20 15:33:35.165802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.283 [2024-11-20 15:33:35.165812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.283 [2024-11-20 15:33:35.165819] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.283 [2024-11-20 15:33:35.165830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.283 [2024-11-20 15:33:35.165837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.283 [2024-11-20 15:33:35.165846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.284 [2024-11-20 15:33:35.165854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.284 [2024-11-20 15:33:35.165864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.284 [2024-11-20 15:33:35.165871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.284 [2024-11-20 15:33:35.165881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.284 [2024-11-20 15:33:35.165888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.284 [2024-11-20 15:33:35.165898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.284 [2024-11-20 15:33:35.165905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.284 [2024-11-20 15:33:35.165914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.284 [2024-11-20 15:33:35.165922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.284 [2024-11-20 15:33:35.165932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.284 [2024-11-20 15:33:35.165939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.284 [2024-11-20 15:33:35.165948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.284 [2024-11-20 15:33:35.165956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.284 [2024-11-20 15:33:35.167246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.284 [2024-11-20 15:33:35.167260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.284 [2024-11-20 15:33:35.167275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.284 [2024-11-20 15:33:35.167284] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.284 [2024-11-20 15:33:35.167295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.284 [2024-11-20 15:33:35.167304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.284 [2024-11-20 15:33:35.167315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.284 [2024-11-20 15:33:35.167327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.284 [2024-11-20 15:33:35.167338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.284 [2024-11-20 15:33:35.167348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.284 [2024-11-20 15:33:35.167358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.284 [2024-11-20 15:33:35.167367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.284 [2024-11-20 15:33:35.167378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.284 [2024-11-20 15:33:35.167385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.284 [2024-11-20 15:33:35.167395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.284 [2024-11-20 15:33:35.167403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.284 [2024-11-20 15:33:35.167412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.284 [2024-11-20 15:33:35.167420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.284 [2024-11-20 15:33:35.167429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.284 [2024-11-20 15:33:35.167438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.284 [2024-11-20 15:33:35.167448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.284 [2024-11-20 15:33:35.167455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.284 [2024-11-20 15:33:35.167465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.284 [2024-11-20 15:33:35.167473] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.284 [2024-11-20 15:33:35.167482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.284 [2024-11-20 15:33:35.167490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.284 [2024-11-20 15:33:35.167500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.284 [2024-11-20 15:33:35.167507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.284 [2024-11-20 15:33:35.167517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.284 [2024-11-20 15:33:35.167524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.284 [2024-11-20 15:33:35.167535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.284 [2024-11-20 15:33:35.167542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.284 [2024-11-20 15:33:35.167553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.284 [2024-11-20 15:33:35.167562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.284 [2024-11-20 15:33:35.167571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.284 [2024-11-20 15:33:35.167579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.284 [2024-11-20 15:33:35.167588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.284 [2024-11-20 15:33:35.167596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.284 [2024-11-20 15:33:35.167607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.284 [2024-11-20 15:33:35.167614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.284 [2024-11-20 15:33:35.167624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.284 [2024-11-20 15:33:35.167631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.284 [2024-11-20 15:33:35.167641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.284 [2024-11-20 15:33:35.167649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.284 [2024-11-20 15:33:35.167659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.284 [2024-11-20 15:33:35.167666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.284 [2024-11-20 15:33:35.167676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.284 [2024-11-20 15:33:35.167683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.284 [2024-11-20 15:33:35.167693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.284 [2024-11-20 15:33:35.167701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.284 [2024-11-20 15:33:35.167710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.284 [2024-11-20 15:33:35.167718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.284 [2024-11-20 15:33:35.167728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.284 [2024-11-20 15:33:35.167735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.284 [2024-11-20 15:33:35.167744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.284 [2024-11-20 15:33:35.167751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.284 [2024-11-20 15:33:35.167761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.284 [2024-11-20 15:33:35.167770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.284 [2024-11-20 15:33:35.167779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.284 [2024-11-20 15:33:35.167787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.284 [2024-11-20 15:33:35.167797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.284 [2024-11-20 15:33:35.167804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.285 [2024-11-20 15:33:35.167815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.285 [2024-11-20 15:33:35.167822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.285 [2024-11-20 15:33:35.167832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.285 [2024-11-20 15:33:35.167840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.285 [2024-11-20 15:33:35.167850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.285 [2024-11-20 15:33:35.167857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.285 [2024-11-20 15:33:35.167866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.285 [2024-11-20 15:33:35.167873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.285 [2024-11-20 15:33:35.167883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.285 [2024-11-20 15:33:35.167891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.285 [2024-11-20 15:33:35.167900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.285 [2024-11-20 15:33:35.167907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.285 [2024-11-20 15:33:35.167917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.285 [2024-11-20 15:33:35.167925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.285 [2024-11-20 15:33:35.167935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.285 [2024-11-20 15:33:35.167943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.285 [2024-11-20 15:33:35.167954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.285 [2024-11-20 15:33:35.167963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.285 [2024-11-20 15:33:35.167973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.285 [2024-11-20 15:33:35.167981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.285 [2024-11-20 15:33:35.167997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.285 [2024-11-20 15:33:35.168005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:46.285 [2024-11-20 15:33:35.168018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.285 [2024-11-20 15:33:35.168027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.285 [2024-11-20 15:33:35.168038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.285 [2024-11-20 15:33:35.168047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.285 [2024-11-20 15:33:35.168057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.285 [2024-11-20 15:33:35.168064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.285 [2024-11-20 15:33:35.168075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.285 [2024-11-20 15:33:35.168084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.285 [2024-11-20 15:33:35.168093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.285 [2024-11-20 15:33:35.168101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.285 [2024-11-20 15:33:35.168111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.285 [2024-11-20 15:33:35.168118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.285 [2024-11-20 15:33:35.168127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.285 [2024-11-20 15:33:35.168135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.285 [2024-11-20 15:33:35.168145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.285 [2024-11-20 15:33:35.168152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.285 [2024-11-20 15:33:35.168166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.285 [2024-11-20 15:33:35.168174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.285 [2024-11-20 15:33:35.168184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.285 [2024-11-20 15:33:35.168191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:46.285 [2024-11-20 15:33:35.168200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.285 [2024-11-20 15:33:35.168208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.285 [2024-11-20 15:33:35.168218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.285 [2024-11-20 15:33:35.168227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.285 [2024-11-20 15:33:35.168237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.285 [2024-11-20 15:33:35.168245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.285 [2024-11-20 15:33:35.168255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.285 [2024-11-20 15:33:35.168262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.285 [2024-11-20 15:33:35.168272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.285 [2024-11-20 15:33:35.168279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.285 [2024-11-20 15:33:35.168289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.285 [2024-11-20 15:33:35.168297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.285 [2024-11-20 15:33:35.168306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.285 [2024-11-20 15:33:35.168314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.285 [2024-11-20 15:33:35.168323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.285 [2024-11-20 15:33:35.168331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.285 [2024-11-20 15:33:35.168341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.285 [2024-11-20 15:33:35.168348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.285 [2024-11-20 15:33:35.168357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.285 [2024-11-20 15:33:35.168365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.285 [2024-11-20 
15:33:35.168375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.285 [2024-11-20 15:33:35.168382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.285 [2024-11-20 15:33:35.168392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.285 [2024-11-20 15:33:35.168400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.285 [2024-11-20 15:33:35.169941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:23:46.285 [2024-11-20 15:33:35.170002] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:23:46.286 [2024-11-20 15:33:35.170019] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:23:46.286 [2024-11-20 15:33:35.170031] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:23:46.286 [2024-11-20 15:33:35.170046] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:23:46.286 [2024-11-20 15:33:35.170061] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:23:46.286 [2024-11-20 15:33:35.170073] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:23:46.286 [2024-11-20 15:33:35.170086] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:23:46.286 [2024-11-20 15:33:35.170099] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:23:46.286 [2024-11-20 15:33:35.170110] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 
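For anyone triaging these bursts: the "(00/08)" pair that spdk_nvme_print_completion() appends to every completion above is the NVMe status code type (SCT) and status code (SC) in hex. SCT 0x0 is Generic Command Status, and generic SC 0x08 is "Command Aborted due to SQ Deletion", the expected status for I/O still queued on a qpair when it is torn down during the controller resets exercised by this test. A minimal decode sketch follows (Python; the table and helper name are illustrative, not SPDK's internal status table):

# Decode the "(SCT/SC)" status pair printed by spdk_nvme_print_completion(),
# e.g. "ABORTED - SQ DELETION (00/08)". Illustrative sketch: only the codes
# seen in this log are tabulated.
GENERIC_STATUS = {                    # SCT 0x0: NVMe Generic Command Status
    0x00: "SUCCESS",
    0x08: "ABORTED - SQ DELETION",    # command aborted because its SQ was deleted
}

def decode_status(sct: int, sc: int) -> str:
    """Return a human-readable name for an NVMe completion status."""
    if sct == 0x0:
        return GENERIC_STATUS.get(sc, f"generic status 0x{sc:02x}")
    return f"sct 0x{sct:x} / sc 0x{sc:02x}"

# Remaining fields in each completion record above:
#   qid/cid - queue and command identifiers, cdw0 - completion dword 0,
#   sqhd    - SQ head pointer after this completion, p - phase tag,
#   m       - "more" bit, dnr - "do not retry" bit (0: host may retry)
print(decode_status(0x0, 0x08))       # -> ABORTED - SQ DELETION

With dnr:0 on every completion the target leaves retry policy to the host, which is consistent with the bdev_nvme layer re-queuing these I/Os once the reset or failover finishes.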
00:23:46.286 [2024-11-20 15:33:35.170146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.286 [2024-11-20 15:33:35.170155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 3 further WRITE pairs (cid:1-3, lba:32896-33152) and 60 READ pairs (cid:4-63, lba:25088-32640 in steps of 128) elided; every command is aborted with ABORTED - SQ DELETION (00/08) ...]
00:23:46.288 [2024-11-20 15:33:35.171275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab6e00 is same with the state(6) to be set
00:23:46.288 [2024-11-20 15:33:35.172828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.288 [2024-11-20 15:33:35.172843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... further READ command/completion pairs follow in the same pattern (cid:1 upward, lba:24704 onward in steps of 128, all ABORTED - SQ DELETION (00/08)); the visible portion reaches cid:25, lba:27776 before this excerpt cuts off mid-record ...]
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.288 [2024-11-20 15:33:35.173311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.288 [2024-11-20 15:33:35.173319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.288 [2024-11-20 15:33:35.173329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.289 [2024-11-20 15:33:35.173337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.289 [2024-11-20 15:33:35.173347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.289 [2024-11-20 15:33:35.173354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.289 [2024-11-20 15:33:35.173365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.289 [2024-11-20 15:33:35.173372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.289 [2024-11-20 15:33:35.173382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.289 [2024-11-20 15:33:35.173390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.289 [2024-11-20 15:33:35.173399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.289 [2024-11-20 15:33:35.173406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.289 [2024-11-20 15:33:35.173416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.289 [2024-11-20 15:33:35.173425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.289 [2024-11-20 15:33:35.173435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.289 [2024-11-20 15:33:35.173442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.289 [2024-11-20 15:33:35.173452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.289 [2024-11-20 15:33:35.173460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.289 [2024-11-20 15:33:35.173470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.289 [2024-11-20 15:33:35.173477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:46.289 [2024-11-20 15:33:35.173488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.289 [2024-11-20 15:33:35.173495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.289 [2024-11-20 15:33:35.173506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.289 [2024-11-20 15:33:35.173514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.289 [2024-11-20 15:33:35.173524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.289 [2024-11-20 15:33:35.173531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.289 [2024-11-20 15:33:35.173541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.289 [2024-11-20 15:33:35.173549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.289 [2024-11-20 15:33:35.173558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.289 [2024-11-20 15:33:35.173566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.289 [2024-11-20 15:33:35.173576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.289 [2024-11-20 15:33:35.173583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.289 [2024-11-20 15:33:35.173593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.289 [2024-11-20 15:33:35.173600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.289 [2024-11-20 15:33:35.173611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.289 [2024-11-20 15:33:35.173618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.289 [2024-11-20 15:33:35.173627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.289 [2024-11-20 15:33:35.173635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.289 [2024-11-20 15:33:35.173646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.289 [2024-11-20 15:33:35.173654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:46.289 [2024-11-20 15:33:35.173664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.289 [2024-11-20 15:33:35.173671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.289 [2024-11-20 15:33:35.173681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.289 [2024-11-20 15:33:35.173688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.289 [2024-11-20 15:33:35.173698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.289 [2024-11-20 15:33:35.173706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.289 [2024-11-20 15:33:35.173716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.289 [2024-11-20 15:33:35.173724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.289 [2024-11-20 15:33:35.173734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.289 [2024-11-20 15:33:35.173742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.289 [2024-11-20 15:33:35.173752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.289 [2024-11-20 15:33:35.173760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.289 [2024-11-20 15:33:35.173770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.289 [2024-11-20 15:33:35.173778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.289 [2024-11-20 15:33:35.173788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.289 [2024-11-20 15:33:35.173796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.289 [2024-11-20 15:33:35.173806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.289 [2024-11-20 15:33:35.173813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.289 [2024-11-20 15:33:35.173823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.289 [2024-11-20 15:33:35.173830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.289 [2024-11-20 
15:33:35.173840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.289 [2024-11-20 15:33:35.173848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.289 [2024-11-20 15:33:35.173857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.290 [2024-11-20 15:33:35.173867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.290 [2024-11-20 15:33:35.173877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.290 [2024-11-20 15:33:35.173885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.290 [2024-11-20 15:33:35.173894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.290 [2024-11-20 15:33:35.173902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.290 [2024-11-20 15:33:35.173912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.290 [2024-11-20 15:33:35.173920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.290 [2024-11-20 15:33:35.173931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.290 [2024-11-20 15:33:35.173938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.290 [2024-11-20 15:33:35.173948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.290 [2024-11-20 15:33:35.173955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.290 [2024-11-20 15:33:35.173965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.290 [2024-11-20 15:33:35.173972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.290 [2024-11-20 15:33:35.175247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.290 [2024-11-20 15:33:35.175261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.290 [2024-11-20 15:33:35.175273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.290 [2024-11-20 15:33:35.175282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.290 [2024-11-20 15:33:35.175294] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.290 [2024-11-20 15:33:35.175303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.290 [2024-11-20 15:33:35.175315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.290 [2024-11-20 15:33:35.175324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.290 [2024-11-20 15:33:35.175336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.290 [2024-11-20 15:33:35.175345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.290 [2024-11-20 15:33:35.175355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.290 [2024-11-20 15:33:35.175364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.290 [2024-11-20 15:33:35.175377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.290 [2024-11-20 15:33:35.175386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.290 [2024-11-20 15:33:35.175396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.290 [2024-11-20 15:33:35.175405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.290 [2024-11-20 15:33:35.175415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.290 [2024-11-20 15:33:35.175422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.290 [2024-11-20 15:33:35.175432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.290 [2024-11-20 15:33:35.175441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.290 [2024-11-20 15:33:35.175450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.290 [2024-11-20 15:33:35.175458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.290 [2024-11-20 15:33:35.175467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.290 [2024-11-20 15:33:35.175477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.290 [2024-11-20 15:33:35.175487] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.290 [2024-11-20 15:33:35.175494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.290 [2024-11-20 15:33:35.175504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.290 [2024-11-20 15:33:35.175513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.290 [2024-11-20 15:33:35.175523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.290 [2024-11-20 15:33:35.175531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.290 [2024-11-20 15:33:35.175542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.290 [2024-11-20 15:33:35.175549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.290 [2024-11-20 15:33:35.175559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.290 [2024-11-20 15:33:35.175567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.290 [2024-11-20 15:33:35.175578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.290 [2024-11-20 15:33:35.175586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.290 [2024-11-20 15:33:35.175595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.290 [2024-11-20 15:33:35.175606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.290 [2024-11-20 15:33:35.175616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.290 [2024-11-20 15:33:35.175623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.290 [2024-11-20 15:33:35.175635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.290 [2024-11-20 15:33:35.175643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.290 [2024-11-20 15:33:35.175653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.290 [2024-11-20 15:33:35.175661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.290 [2024-11-20 15:33:35.175670] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.290 [2024-11-20 15:33:35.175680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.290 [2024-11-20 15:33:35.175689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.290 [2024-11-20 15:33:35.175697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.290 [2024-11-20 15:33:35.175707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.290 [2024-11-20 15:33:35.175716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.290 [2024-11-20 15:33:35.175726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.291 [2024-11-20 15:33:35.175733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.291 [2024-11-20 15:33:35.175743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.291 [2024-11-20 15:33:35.175751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.291 [2024-11-20 15:33:35.175761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.291 [2024-11-20 15:33:35.175770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.291 [2024-11-20 15:33:35.175780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.291 [2024-11-20 15:33:35.175788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.291 [2024-11-20 15:33:35.175798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.291 [2024-11-20 15:33:35.175808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.291 [2024-11-20 15:33:35.175818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.291 [2024-11-20 15:33:35.175825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.291 [2024-11-20 15:33:35.175835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.291 [2024-11-20 15:33:35.175845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.291 [2024-11-20 15:33:35.175856] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.291 [2024-11-20 15:33:35.175863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.291 [2024-11-20 15:33:35.175873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.291 [2024-11-20 15:33:35.175881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.291 [2024-11-20 15:33:35.175890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.291 [2024-11-20 15:33:35.175898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.291 [2024-11-20 15:33:35.175907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.291 [2024-11-20 15:33:35.175916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.291 [2024-11-20 15:33:35.175929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.291 [2024-11-20 15:33:35.175939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.291 [2024-11-20 15:33:35.175949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.291 [2024-11-20 15:33:35.175957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.291 [2024-11-20 15:33:35.175967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.291 [2024-11-20 15:33:35.175976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.291 [2024-11-20 15:33:35.175985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.291 [2024-11-20 15:33:35.175993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.291 [2024-11-20 15:33:35.176003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.291 [2024-11-20 15:33:35.176010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.291 [2024-11-20 15:33:35.176019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.291 [2024-11-20 15:33:35.176027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.291 [2024-11-20 15:33:35.176036] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.291 [2024-11-20 15:33:35.176045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.291 [2024-11-20 15:33:35.176054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.291 [2024-11-20 15:33:35.176062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.291 [2024-11-20 15:33:35.176078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.291 [2024-11-20 15:33:35.176086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.291 [2024-11-20 15:33:35.176096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.291 [2024-11-20 15:33:35.176103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.291 [2024-11-20 15:33:35.176113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.291 [2024-11-20 15:33:35.176120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.291 [2024-11-20 15:33:35.176131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.291 [2024-11-20 15:33:35.176138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.291 [2024-11-20 15:33:35.176147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.291 [2024-11-20 15:33:35.176156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.291 [2024-11-20 15:33:35.176182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.291 [2024-11-20 15:33:35.176190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.291 [2024-11-20 15:33:35.176200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.291 [2024-11-20 15:33:35.176208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.291 [2024-11-20 15:33:35.176218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.291 [2024-11-20 15:33:35.176226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.291 [2024-11-20 15:33:35.176236] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.292 [2024-11-20 15:33:35.176244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.292 [2024-11-20 15:33:35.176254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.292 [2024-11-20 15:33:35.176262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.292 [2024-11-20 15:33:35.176271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.292 [2024-11-20 15:33:35.176279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.292 [2024-11-20 15:33:35.176289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.292 [2024-11-20 15:33:35.176297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.292 [2024-11-20 15:33:35.176306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.292 [2024-11-20 15:33:35.176316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.292 [2024-11-20 15:33:35.176326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.292 [2024-11-20 15:33:35.176333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.292 [2024-11-20 15:33:35.176343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.292 [2024-11-20 15:33:35.176350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.292 [2024-11-20 15:33:35.176360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.292 [2024-11-20 15:33:35.176368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.292 [2024-11-20 15:33:35.176377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.292 [2024-11-20 15:33:35.176385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.292 [2024-11-20 15:33:35.176395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.292 [2024-11-20 15:33:35.180886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.292 [2024-11-20 15:33:35.180938] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.292 [2024-11-20 15:33:35.180948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.292 [2024-11-20 15:33:35.180958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.292 [2024-11-20 15:33:35.180967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.292 [2024-11-20 15:33:35.182325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.292 [2024-11-20 15:33:35.182344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.292 [2024-11-20 15:33:35.182364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.292 [2024-11-20 15:33:35.182375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.292 [2024-11-20 15:33:35.182387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.292 [2024-11-20 15:33:35.182397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.292 [2024-11-20 15:33:35.182409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.292 [2024-11-20 15:33:35.182418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.292 [2024-11-20 15:33:35.182428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.292 [2024-11-20 15:33:35.182436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.292 [2024-11-20 15:33:35.182453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.292 [2024-11-20 15:33:35.182460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.292 [2024-11-20 15:33:35.182470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.292 [2024-11-20 15:33:35.182479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.292 [2024-11-20 15:33:35.182488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.292 [2024-11-20 15:33:35.182496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.292 [2024-11-20 15:33:35.182506] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.292 [2024-11-20 15:33:35.182514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.292 [2024-11-20 15:33:35.182524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.292 [2024-11-20 15:33:35.182532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.292 [2024-11-20 15:33:35.182542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.292 [2024-11-20 15:33:35.182550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.292 [2024-11-20 15:33:35.182560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.292 [2024-11-20 15:33:35.182569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.292 [2024-11-20 15:33:35.182578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.292 [2024-11-20 15:33:35.182586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.292 [2024-11-20 15:33:35.182596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.292 [2024-11-20 15:33:35.182604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.292 [2024-11-20 15:33:35.182615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.292 [2024-11-20 15:33:35.182622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.292 [2024-11-20 15:33:35.182632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.292 [2024-11-20 15:33:35.182640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.292 [2024-11-20 15:33:35.182650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.292 [2024-11-20 15:33:35.182658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.292 [2024-11-20 15:33:35.182668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.292 [2024-11-20 15:33:35.182679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.292 [2024-11-20 15:33:35.182688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.292 [2024-11-20 15:33:35.182696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.292 [2024-11-20 15:33:35.182705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.292 [2024-11-20 15:33:35.182714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.292 [2024-11-20 15:33:35.182724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.292 [2024-11-20 15:33:35.182732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.292 [2024-11-20 15:33:35.182742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.292 [2024-11-20 15:33:35.182750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.293 [2024-11-20 15:33:35.182760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.293 [2024-11-20 15:33:35.182769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.293 [2024-11-20 15:33:35.182778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.293 [2024-11-20 15:33:35.182787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.293 [2024-11-20 15:33:35.182797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.293 [2024-11-20 15:33:35.182806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.293 [2024-11-20 15:33:35.182817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.293 [2024-11-20 15:33:35.182824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.293 [2024-11-20 15:33:35.182834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.293 [2024-11-20 15:33:35.182842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.293 [2024-11-20 15:33:35.182852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.293 [2024-11-20 15:33:35.182860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.293 [2024-11-20 15:33:35.182870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.293 [2024-11-20 15:33:35.182878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.293 [2024-11-20 15:33:35.182888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.293 [2024-11-20 15:33:35.182895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.293 [2024-11-20 15:33:35.182908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.293 [2024-11-20 15:33:35.182915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.293 [2024-11-20 15:33:35.182925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.293 [2024-11-20 15:33:35.182933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.293 [2024-11-20 15:33:35.182942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.293 [2024-11-20 15:33:35.182950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.293 [2024-11-20 15:33:35.182960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.293 [2024-11-20 15:33:35.182968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.293 [2024-11-20 15:33:35.182977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.293 [2024-11-20 15:33:35.182985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.293 [2024-11-20 15:33:35.182995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.293 [2024-11-20 15:33:35.183002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.293 [2024-11-20 15:33:35.183012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.293 [2024-11-20 15:33:35.183020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.293 [2024-11-20 15:33:35.183029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.293 [2024-11-20 15:33:35.183037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.293 [2024-11-20 15:33:35.183047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:46.293 [2024-11-20 15:33:35.183055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.293 [2024-11-20 15:33:35.183065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.293 [2024-11-20 15:33:35.183072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.293 [2024-11-20 15:33:35.183081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.293 [2024-11-20 15:33:35.183089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.293 [2024-11-20 15:33:35.183098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.293 [2024-11-20 15:33:35.183106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.293 [2024-11-20 15:33:35.183115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.293 [2024-11-20 15:33:35.183125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.293 [2024-11-20 15:33:35.183135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.293 [2024-11-20 15:33:35.183143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.293 [2024-11-20 15:33:35.183152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.293 [2024-11-20 15:33:35.183169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.293 [2024-11-20 15:33:35.183180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.293 [2024-11-20 15:33:35.183187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.293 [2024-11-20 15:33:35.183197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.293 [2024-11-20 15:33:35.183205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.293 [2024-11-20 15:33:35.183215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.293 [2024-11-20 15:33:35.183222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.293 [2024-11-20 15:33:35.183232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:46.293 [2024-11-20 15:33:35.183239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.293 [2024-11-20 15:33:35.183252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.293 [2024-11-20 15:33:35.183260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.293 [2024-11-20 15:33:35.183270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.293 [2024-11-20 15:33:35.183278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.293 [2024-11-20 15:33:35.183288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.293 [2024-11-20 15:33:35.183295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.293 [2024-11-20 15:33:35.183305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.293 [2024-11-20 15:33:35.183313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.293 [2024-11-20 15:33:35.183324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.293 [2024-11-20 15:33:35.183331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.293 [2024-11-20 15:33:35.183342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.293 [2024-11-20 15:33:35.183350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.293 [2024-11-20 15:33:35.183363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.294 [2024-11-20 15:33:35.183371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.294 [2024-11-20 15:33:35.183381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.294 [2024-11-20 15:33:35.183390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.294 [2024-11-20 15:33:35.183400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.294 [2024-11-20 15:33:35.183409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.294 [2024-11-20 15:33:35.183420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.294 [2024-11-20 
15:33:35.183428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.294 [2024-11-20 15:33:35.183439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.294 [2024-11-20 15:33:35.183446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.294 [2024-11-20 15:33:35.183457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.294 [2024-11-20 15:33:35.183465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.294 [2024-11-20 15:33:35.183475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.294 [2024-11-20 15:33:35.183482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.294 [2024-11-20 15:33:35.183492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.294 [2024-11-20 15:33:35.183500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.294 [2024-11-20 15:33:35.183510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.294 [2024-11-20 15:33:35.183517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.294 [2024-11-20 15:33:35.185356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:23:46.294 [2024-11-20 15:33:35.185379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:23:46.294 [2024-11-20 15:33:35.185389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:23:46.294 [2024-11-20 15:33:35.185400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:23:46.294 [2024-11-20 15:33:35.185410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:23:46.294 [2024-11-20 15:33:35.185766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.294 [2024-11-20 15:33:35.185785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8b2850 with addr=10.0.0.2, port=4420 00:23:46.294 [2024-11-20 15:33:35.185794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b2850 is same with the state(6) to be set 00:23:46.294 [2024-11-20 15:33:35.185839] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:23:46.294 [2024-11-20 15:33:35.185853] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 
00:23:46.294 [2024-11-20 15:33:35.185867] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:23:46.294 [2024-11-20 15:33:35.185879] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:23:46.294 [2024-11-20 15:33:35.185894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b2850 (9): Bad file descriptor
00:23:46.294 [2024-11-20 15:33:35.203476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:23:46.294 [2024-11-20 15:33:35.203502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:23:46.294 [2024-11-20 15:33:35.203514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:23:46.556 task offset: 24576 on job bdev=Nvme2n1 fails
00:23:46.556
00:23:46.556 Latency(us)
00:23:46.556 All ten jobs (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536, Verification LBA range: start 0x0 length 0x400) ended in about 1 second with error.
00:23:46.556 [2024-11-20T14:33:35.516Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:46.556 Nvme1n1  : 1.00  196.32  12.27  64.10  0.00  243095.50   4860.59  235929.60
00:23:46.556 Nvme2n1  : 0.99  194.84  12.18  64.95  0.00  238781.44  15073.28  260396.37
00:23:46.556 Nvme3n1  : 1.00  191.79  11.99  63.93  0.00  237924.27  13981.01  242920.11
00:23:46.556 Nvme4n1  : 1.01  194.43  12.15  63.49  0.00  231177.40  18896.21  239424.85
00:23:46.556 Nvme5n1  : 1.01  189.98  11.87  63.33  0.00  230638.72  21408.43  251658.24
00:23:46.556 Nvme6n1  : 0.99  193.82  12.11  64.61  0.00  220768.00  18459.31  235929.60
00:23:46.556 Nvme7n1  : 0.99  194.57  12.16  64.86  0.00  214993.92  15947.09  244667.73
00:23:46.556 Nvme8n1  : 0.99  194.34  12.15  64.78  0.00  210429.23  16056.32  248162.99
00:23:46.556 Nvme9n1  : 0.99  128.89   8.06  64.45  0.00  275944.39  13871.79  256901.12
00:23:46.556 Nvme10n1 : 1.00  128.58   8.04  64.29  0.00  270483.06  19879.25  269134.51
00:23:46.556 [2024-11-20T14:33:35.516Z] ===================================================================================================================
00:23:46.556 [2024-11-20T14:33:35.516Z] Total    :       1807.56 112.97 642.77  0.00  235545.14   4860.59  269134.51
00:23:46.556 [2024-11-20 15:33:35.228978] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:23:46.556 [2024-11-20 15:33:35.229028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:23:46.556 [2024-11-20 15:33:35.229422-230914] posix.c:1054:posix_sock_create / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock / nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: connect() failed, errno = 111; sock connection error / recv state(6) already set, for tqpair=0xcd3ca0, 0x7ca610, 0xd13b60, 0xcf00d0, 0xd1f830 with addr=10.0.0.2, port=4420
00:23:46.557 [2024-11-20 15:33:35.230957] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
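For cross-checking the table: the MiB/s column is just IOPS x IO size / 2^20, with the 65536-byte I/O size shown in the job header. A quick check of the Nvme1n1 row (values copied from the table above; awk serves only as a calculator):

  awk 'BEGIN {
      iops = 196.32; io_size = 65536                        # Nvme1n1: IOPS column, IO size in bytes
      printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024) # prints 12.27, matching the MiB/s column
  }'

The Total row follows the same identity: 1807.56 x 65536 / 2^20 = 112.97 MiB/s.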
00:23:46.557 [2024-11-20 15:33:35.230974-231034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd1f830, 0xcf00d0, 0xd13b60, 0x7ca610, 0xcd3ca0 (9): Bad file descriptor
00:23:46.557 1807.56 IOPS, 112.97 MiB/s [2024-11-20T14:33:35.517Z]
00:23:46.557 [2024-11-20 15:33:35.232731-233810] posix.c:1054:posix_sock_create / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock / nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: connect() failed, errno = 111; sock connection error / recv state(6) already set, for tqpair=0x8b2cb0, 0x8b0790, 0x8b0fc0, 0xcddf20 with addr=10.0.0.2, port=4420
00:23:46.557 [2024-11-20 15:33:35.233826-233851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init / 1826:spdk_nvme_ctrlr_reconnect_poll_async / 1110:nvme_ctrlr_fail, bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.
00:23:46.557 [2024-11-20 15:33:35.233878-233930] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, cnode9, cnode8, cnode7, cnode6; 1] Unable to perform failover, already in progress.
00:23:46.557 [2024-11-20 15:33:35.234036-234071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b2cb0, 0x8b0790, 0x8b0fc0, 0xcddf20 (9): Bad file descriptor
00:23:46.557 [2024-11-20 15:33:35.234080-234228] nvme_ctrlr.c:4206 / 1826 / 1110, bdev_nvme.c:2280: *ERROR*: for each of [nqn.2016-06.io.spdk:cnode6, cnode7, cnode8, cnode9, cnode10; 1]: Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.
00:23:46.557 [2024-11-20 15:33:35.234825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:23:46.557 [2024-11-20 15:33:35.234845-234955] nvme_ctrlr.c:4206 / 1826 / 1110, bdev_nvme.c:2280: *ERROR*: for each of [nqn.2016-06.io.spdk:cnode1, cnode3, cnode4, cnode5; 1]: Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.
00:23:46.558 [2024-11-20 15:33:35.235305-235328] posix.c:1054 / nvme_tcp.c:2288 / nvme_tcp.c: 326: *ERROR*: connect() failed, errno = 111; sock connection error / recv state(6) already set, for tqpair=0x8b2850 with addr=10.0.0.2, port=4420
00:23:46.558 [2024-11-20 15:33:35.235357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b2850 (9): Bad file descriptor
00:23:46.558 [2024-11-20 15:33:35.235385-235408] nvme_ctrlr.c:4206 / 1826 / 1110, bdev_nvme.c:2280: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.
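Every "Resetting controller failed." block above is bdev_nvme abandoning a reconnect attempt because the target side is already gone. When attaching controllers by hand, how long the driver keeps retrying is tunable at attach time; a sketch with rpc.py (the three timeout flags and their values here are assumptions from current rpc.py help output, to be verified against the tree under test):

  # hypothetical values; flag names per `rpc.py bdev_nvme_attach_controller -h` (assumed)
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 30 \
      --reconnect-delay-sec 5 \
      --fast-io-fail-timeout-sec 10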
00:23:46.558 15:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1
00:23:47.501 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 670910
00:23:47.501 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652-655 -- # local es=0; valid_exec_arg wait 670910; local arg=wait; case "$(type -t "$arg")" in; type -t wait; wait 670910
00:23:47.501 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655-679 -- # es=255; (( es > 128 )); es=127; case "$es" in; es=1; (( !es == 0 ))
00:23:47.501 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget
00:23:47.501 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:23:47.501 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:47.501 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:47.501 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini
00:23:47.501 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:47.501 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync
00:23:47.501 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123-126 -- # '[' tcp == tcp ']'; set +e; for i in {1..20}; modprobe -v -r nvme-tcp
00:23:47.501 rmmod nvme_tcp
00:23:47.501 rmmod nvme_fabrics
00:23:47.501 rmmod nvme_keyring
00:23:47.501 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127-129 -- # modprobe -v -r nvme-fabrics; set -e; return 0
00:23:47.501 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517-518 -- # '[' -n 670527 ']'; killprocess 670527
00:23:47.501 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954-958 -- # '[' -z 670527 ']'; kill -0 670527
00:23:47.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (670527) - No such process
00:23:47.501 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 670527 is not found'
00:23:47.501 Process with pid 670527 is not found
00:23:47.501 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520-524 -- # '[' '' == iso ']'; [[ tcp == \t\c\p ]]; nvmf_tcp_fini
00:23:47.501 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr
00:23:47.501 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save; grep -v SPDK_NVMF; iptables-restore
00:23:47.501 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:23:47.501 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:23:47.501 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:47.501 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:47.501 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:50.047 15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:23:50.047
00:23:50.047 real 0m7.577s
00:23:50.047 user 0m18.013s
00:23:50.047 sys 0m1.263s
00:23:50.047 15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:50.047 15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:23:50.047 ************************************
00:23:50.047 END TEST nvmf_shutdown_tc3
00:23:50.047 ************************************
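The es=255 -> es=127 -> es=1 hops traced above are autotest_common.sh's status-inversion helper at work: `wait` on the failed bdevperf job returns a >128 status, which the harness folds down and then inverts, so that `NOT wait 670910` succeeds precisely because the job failed. A condensed, self-contained rendering of that logic (a simplified sketch, not a verbatim copy of the script):

  #!/usr/bin/env bash
  # NOT <cmd...>: succeed (exit 0) iff <cmd> exits non-zero.
  NOT() {
      local es=0
      "$@" || es=$?            # run the command, capture its exit status
      (( es > 128 )) && es=127 # fold signal-style statuses (e.g. 255 from wait) down
      (( es != 0 )) && es=1    # collapse any remaining failure code to 1
      (( !es == 0 ))           # invert: return 0 iff the command failed
  }
  # e.g.: NOT wait 670910   # succeeds here because the bdevperf job exited non-zero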
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]]
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]]
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST nvmf_shutdown_tc4
************************************
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']'
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315-316 -- # pci_devs=(); local -a pci_devs; pci_net_devs=(); local -a pci_net_devs
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317-322 -- # pci_drivers=(); local -A pci_drivers; net_devs=(); local -ga net_devs; e810=(); local -ga e810; x722=(); local -ga x722; mlx=(); local -ga mlx
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325-344 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}); e810+=(${pci_bus_cache["$intel:0x159b"]}); x722+=(${pci_bus_cache["$intel:0x37d2"]}); mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}); mlx+=(${pci_bus_cache["$mellanox:0x1021"]}); mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}); mlx+=(${pci_bus_cache["$mellanox:0x101d"]}); mlx+=(${pci_bus_cache["$mellanox:0x101b"]}); mlx+=(${pci_bus_cache["$mellanox:0x1017"]}); mlx+=(${pci_bus_cache["$mellanox:0x1019"]}); mlx+=(${pci_bus_cache["$mellanox:0x1015"]}); mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346-347 -- # pci_devs+=("${e810[@]}"); [[ tcp == rdma ]]
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353-356 -- # [[ e810 == mlx5 ]]; [[ e810 == e810 ]]; pci_devs=("${e810[@]}")
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 ))
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366-367 -- # for pci in "${pci_devs[@]}"; echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
Found 0000:4b:00.0 (0x8086 - 0x159b)
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368-378 -- # [[ ice == unknown ]]; [[ ice == unbound ]]; [[ 0x159b == \0\x\1\0\1\7 ]]; [[ 0x159b == \0\x\1\0\1\9 ]]; [[ tcp == rdma ]]
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366-367 -- # for pci in "${pci_devs[@]}"; echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
Found 0000:4b:00.1 (0x8086 - 0x159b)
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368-378 -- # [[ ice == unknown ]]; [[ ice == unbound ]]; [[ 0x159b == \0\x\1\0\1\7 ]]; [[ 0x159b == \0\x\1\0\1\9 ]]; [[ tcp == rdma ]]
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392-398 -- # (( 0 > 0 )); [[ e810 == e810 ]]; [[ tcp == rdma ]]
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410-427 -- # for pci in "${pci_devs[@]}"; pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*); [[ tcp == tcp ]]; for net_dev in "${!pci_net_devs[@]}"; [[ up == up ]]; (( 1 == 0 )); pci_net_devs=("${pci_net_devs[@]##*/}")
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
Found net devices under 0000:4b:00.0: cvl_0_0
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410-427 -- # for pci in "${pci_devs[@]}"; pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*); [[ tcp == tcp ]]; for net_dev in "${!pci_net_devs[@]}"; [[ up == up ]]; (( 1 == 0 )); pci_net_devs=("${pci_net_devs[@]##*/}")
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
Found net devices under 0000:4b:00.1: cvl_0_1
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 ))
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442-446 -- # is_hw=yes; [[ yes == yes ]]; [[ tcp == tcp ]]; nvmf_tcp_init
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250-253 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1; NVMF_FIRST_TARGET_IP=10.0.0.2; NVMF_INITIATOR_IP=10.0.0.1; TCP_INTERFACE_LIST=("${net_devs[@]}")
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256-259 -- # (( 2 > 1 )); NVMF_TARGET_INTERFACE=cvl_0_0; NVMF_INITIATOR_INTERFACE=cvl_0_1
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262-263 -- # NVMF_SECOND_TARGET_IP=; NVMF_SECOND_INITIATOR_IP=
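gather_supported_nvmf_pci_devs above walks a PCI-ID cache for vendor 0x8086, matches device 0x159b (E810), and reads the NIC names from each function's net/ directory. The same discovery can be done directly against sysfs; a minimal stand-alone sketch, trimmed to the two E810 device IDs seen in this run:

  #!/usr/bin/env bash
  # Enumerate E810 PCI functions (8086:1592 / 8086:159b) and their net devices.
  for dev in /sys/bus/pci/devices/*; do
      vendor=$(cat "$dev/vendor") device=$(cat "$dev/device")
      [[ $vendor == 0x8086 && ( $device == 0x1592 || $device == 0x159b ) ]] || continue
      for net in "$dev"/net/*; do
          [[ -e $net ]] || continue
          echo "Found ${dev##*/} ($vendor - $device): ${net##*/}"
      done
  done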
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265-266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk; NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267-268 -- # ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms

--- 10.0.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:50.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms
00:23:50.049
00:23:50.049 --- 10.0.0.1 ping statistics ---
00:23:50.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:50.049 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms
00:23:50.049 15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482-493 -- # NVMF_TRANSPORT_OPTS='-t tcp'; [[ tcp == \r\d\m\a ]]; [[ tcp == \t\c\p ]]; NVMF_TRANSPORT_OPTS='-t tcp -o'
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:23:50.311 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
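The nvmf/common.sh@265-291 records above are the whole point-to-point bring-up: the target port moves into its own namespace at 10.0.0.2/24, the initiator port stays in the root namespace at 10.0.0.1/24, the firewall is opened for TCP/4420, and a ping each way proves the path. Collected into a stand-alone script (root required; interface names as in this run):

  #!/usr/bin/env bash
  set -e
  ip netns add cvl_0_0_ns_spdk                      # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
  ping -c 1 10.0.0.2                                # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator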
00:23:50.311 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=672146
15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 672146
15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 672146 ']'
15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839-840 -- # local rpc_addr=/var/tmp/spdk.sock; local max_retries=100
15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable
15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
[2024-11-20 15:33:39.081781] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization...
[2024-11-20 15:33:39.081849] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[2024-11-20 15:33:39.178521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
[2024-11-20 15:33:39.212499] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-11-20 15:33:39.212531] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-11-20 15:33:39.212536] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-11-20 15:33:39.212541] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-11-20 15:33:39.212546] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[2024-11-20 15:33:39.213864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
[2024-11-20 15:33:39.213987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
[2024-11-20 15:33:39.214103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
[2024-11-20 15:33:39.214105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:23:51.252 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 ))
15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0
15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
[2024-11-20 15:33:39.908444] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
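waitforlisten above gives the freshly launched nvmf_tgt up to max_retries=100 polls to come up on /var/tmp/spdk.sock. A minimal equivalent, assuming only stock rpc.py and its rpc_get_methods verb as the liveness probe:

  #!/usr/bin/env bash
  pid=$1; rpc_addr=${2:-/var/tmp/spdk.sock}
  for ((i = 0; i < 100; i++)); do
      kill -0 "$pid" 2>/dev/null || { echo "target died" >&2; exit 1; }
      if scripts/rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1; then
          exit 0                      # RPC socket is up and answering
      fi
      sleep 0.1
  done
  echo "timed out waiting for $rpc_addr" >&2; exit 1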
15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28-29 -- # for i in "${num_subsystems[@]}"; cat (one @28/@29 pair per subsystem, i = 1..10)
15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd
15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:23:51.253 Malloc1
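Each of the ten @28/@29 iterations above appends one subsystem's worth of RPCs to rpcs.txt, and the single rpc_cmd at @36 replays the whole batch; the Malloc1..Malloc10 lines that follow are the bdev-create replies. Spelled out for one subsystem with stock rpc.py verbs (the bdev size, block size, and serial number here are illustrative, not read from the script):

  # one subsystem's worth of the batch that rpc_cmd feeds to the target
  scripts/rpc.py bdev_malloc_create -b Malloc1 128 512              # 128 MiB bdev, 512 B blocks (sizes assumed)
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1   # -a: allow any host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1      # expose the bdev as a namespace
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420                                    # the listener reported in the notice below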
00:23:51.253 [2024-11-20 15:33:40.024055] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:51.253 Malloc2 00:23:51.253 Malloc3 00:23:51.253 Malloc4 00:23:51.253 Malloc5 00:23:51.253 Malloc6 00:23:51.513 Malloc7 00:23:51.513 Malloc8 00:23:51.513 Malloc9 00:23:51.513 Malloc10 00:23:51.513 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.513 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:51.513 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:51.513 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:51.513 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=672434 00:23:51.513 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:23:51.513 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:23:51.774 [2024-11-20 15:33:40.503669] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:57.219 15:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:57.219 15:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 672146 00:23:57.219 15:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 672146 ']' 00:23:57.219 15:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 672146 00:23:57.219 15:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:23:57.219 15:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:57.219 15:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 672146 00:23:57.219 15:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:57.219 15:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:57.219 15:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 672146' 00:23:57.219 killing process with pid 672146 00:23:57.219 15:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 672146 00:23:57.219 15:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 672146 00:23:57.220 [2024-11-20 15:33:45.498742] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f2390 is same with the state(6) to be set
00:23:57.220 [2024-11-20 15:33:45.499056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f2860 is same with the state(6) to be set
00:23:57.220 [2024-11-20 15:33:45.499427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f19f0 is same with the state(6) to be set
00:23:57.220 [2024-11-20 15:33:45.503177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f3200 is same with the state(6) to be set
00:23:57.220 [2024-11-20 15:33:45.503441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1719e20 is same with the state(6) to be set
00:23:57.220 [2024-11-20 15:33:45.503714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171a310 is same with the state(6) to be set
00:23:57.220 [2024-11-20 15:33:45.504133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f2d30 is same with the state(6) to be set
00:23:57.220 [2024-11-20 15:33:45.504466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171acb0 is same with the state(6) to be set
00:23:57.220 [2024-11-20 15:33:45.504675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171b1a0 is same with the state(6) to be set
00:23:57.220 [2024-11-20 15:33:45.504995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171b670 is same with the state(6) to be set
00:23:57.220 [2024-11-20 15:33:45.505251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171a7e0 is same with the state(6) to be set
00:23:57.221 [2024-11-20 15:33:45.507007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171c010 is same with the state(6) to be set
00:23:57.221 [2024-11-20 15:33:45.507241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171c4e0 is same with the state(6) to be set
00:23:57.222 [2024-11-20 15:33:45.507755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171c9b0 is same with the state(6) to be set
00:23:57.222 [2024-11-20 15:33:45.507943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171bb40 is same with the state(6) to be set
[each recv-state record above recurs a few times with successive timestamps; the recurrences arrived interleaved mid-message with the I/O failure records below and have been de-interleaved here]
00:23:57.220 [2024-11-20 15:33:45.504985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:57.221 [2024-11-20 15:33:45.506817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:57.222 [2024-11-20 15:33:45.508238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:57.222 NVMe io qpair process completion error
[long runs of "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" accompany each failing queue throughout this section]
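The same CQ transport error -6 pattern repeats below for cnode7, cnode2, and cnode4: every queue pair on every connected controller drops once the target process (pid 672146) is killed with writes still queued. The kill itself is the killprocess xtrace at 15:33:45 above; the following is a minimal sketch of that helper, reconstructed from the traced steps at autotest_common.sh@954-978 (the real function may differ in detail, and the sudo branch is an assumption).

# Minimal sketch of killprocess as traced above; not the verbatim helper.
killprocess() {
    local pid=$1
    if [ -z "$pid" ]; then return 1; fi            # @954: refuse an empty pid
    kill -0 "$pid" 2> /dev/null || return 0        # @958: already gone?
    local process_name=
    if [ "$(uname)" = Linux ]; then                # @959: platform check
        process_name=$(ps --no-headers -o comm= "$pid")   # @960: reactor_1 here
    fi
    if [ "$process_name" = sudo ]; then            # @964
        kill -9 "$pid"                             # assumption: force-kill sudo wrappers
    else
        echo "killing process with pid $pid"       # @972
        kill "$pid"                                # @973
    fi
    wait "$pid" 2> /dev/null                       # @978: reap so the harness can proceed
}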
00:23:57.222 [2024-11-20 15:33:45.509421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:57.222 [2024-11-20 15:33:45.510221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.223 [2024-11-20 15:33:45.511141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:57.223 [2024-11-20 15:33:45.513010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:57.223 NVMe io qpair process completion error
00:23:57.224 [2024-11-20 15:33:45.514128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:57.224 [2024-11-20 15:33:45.514972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:57.224 [2024-11-20 15:33:45.515907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:57.225 [2024-11-20 15:33:45.518058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.225 NVMe io qpair process completion error
00:23:57.225 [2024-11-20 15:33:45.519294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:57.225 [2024-11-20 15:33:45.520122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:57.226 [the capture ends here, midway through cnode4's run of "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records]
failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 [2024-11-20 
15:33:45.521048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 
00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 [2024-11-20 15:33:45.522503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:57.226 NVMe io qpair process completion error 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 starting I/O failed: -6 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.226 Write completed with error (sct=0, sc=8) 00:23:57.227 Write completed with error (sct=0, sc=8) 00:23:57.227 Write completed with error (sct=0, sc=8) 00:23:57.227 starting I/O failed: -6 00:23:57.227 Write completed with error (sct=0, sc=8) 00:23:57.227 Write completed with error (sct=0, sc=8) 00:23:57.227 Write completed with error (sct=0, sc=8) 00:23:57.227 Write completed with error (sct=0, sc=8) 00:23:57.227 starting I/O failed: -6 00:23:57.227 
00:23:57.227 [2024-11-20 15:33:45.523709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error entries truncated ...]
00:23:57.227 [2024-11-20 15:33:45.524586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error entries truncated ...]
00:23:57.227 [2024-11-20 15:33:45.525536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error entries truncated ...]
00:23:57.228 [2024-11-20 15:33:45.527250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:57.228 NVMe io qpair process completion error
[... repeated write-error entries truncated ...]
00:23:57.228 [2024-11-20 15:33:45.528665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error entries truncated ...]
00:23:57.228 [2024-11-20 15:33:45.529511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error entries truncated ...]
00:23:57.229 [2024-11-20 15:33:45.530448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error entries truncated ...]
00:23:57.230 [2024-11-20 15:33:45.533837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:57.230 NVMe io qpair process completion error
[... repeated write-error entries truncated ...]
00:23:57.230 [2024-11-20 15:33:45.535016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error entries truncated ...]
00:23:57.230 [2024-11-20 15:33:45.536039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error entries truncated ...]
00:23:57.230 [2024-11-20 15:33:45.536952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error entries truncated ...]
00:23:57.231 [2024-11-20 15:33:45.538575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:57.231 NVMe io qpair process completion error
[... repeated write-error entries truncated ...]
00:23:57.231 [2024-11-20 15:33:45.540190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error entries truncated ...]
00:23:57.231 Write completed with error
(sct=0, sc=8) 00:23:57.231 starting I/O failed: -6 00:23:57.231 Write completed with error (sct=0, sc=8) 00:23:57.231 starting I/O failed: -6 00:23:57.231 Write completed with error (sct=0, sc=8) 00:23:57.231 Write completed with error (sct=0, sc=8) 00:23:57.231 starting I/O failed: -6 00:23:57.231 Write completed with error (sct=0, sc=8) 00:23:57.231 starting I/O failed: -6 00:23:57.231 Write completed with error (sct=0, sc=8) 00:23:57.231 starting I/O failed: -6 00:23:57.231 Write completed with error (sct=0, sc=8) 00:23:57.231 Write completed with error (sct=0, sc=8) 00:23:57.231 starting I/O failed: -6 00:23:57.231 Write completed with error (sct=0, sc=8) 00:23:57.231 starting I/O failed: -6 00:23:57.231 Write completed with error (sct=0, sc=8) 00:23:57.231 starting I/O failed: -6 00:23:57.231 Write completed with error (sct=0, sc=8) 00:23:57.231 Write completed with error (sct=0, sc=8) 00:23:57.231 starting I/O failed: -6 00:23:57.231 Write completed with error (sct=0, sc=8) 00:23:57.231 starting I/O failed: -6 00:23:57.231 Write completed with error (sct=0, sc=8) 00:23:57.231 starting I/O failed: -6 00:23:57.231 Write completed with error (sct=0, sc=8) 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: 
-6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 [2024-11-20 15:33:45.541590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 
00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 [2024-11-20 15:33:45.544085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:57.232 NVMe io qpair process 
completion error 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 starting I/O failed: -6 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.232 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 [2024-11-20 15:33:45.545207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O 
failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 [2024-11-20 15:33:45.546100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write 
completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 [2024-11-20 15:33:45.547028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 
00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.233 starting I/O failed: -6 00:23:57.233 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 
00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 [2024-11-20 15:33:45.548483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:57.234 NVMe io qpair process completion error 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 
00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 [2024-11-20 15:33:45.549705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 starting I/O failed: -6 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.234 Write completed with error (sct=0, sc=8) 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 
starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 [2024-11-20 15:33:45.550523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 Write completed with error 
(sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 [2024-11-20 15:33:45.551450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error 
(sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.235 Write completed with error (sct=0, sc=8) 00:23:57.235 starting I/O failed: -6 00:23:57.236 Write completed with error (sct=0, sc=8) 00:23:57.236 starting I/O failed: -6 00:23:57.236 Write completed with error (sct=0, sc=8) 00:23:57.236 starting I/O failed: -6 00:23:57.236 Write completed with error (sct=0, sc=8) 00:23:57.236 starting I/O failed: -6 00:23:57.236 Write completed with error (sct=0, sc=8) 00:23:57.236 starting I/O failed: -6 00:23:57.236 [2024-11-20 15:33:45.555414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:57.236 NVMe io qpair process completion error 00:23:57.236 Initializing NVMe Controllers 00:23:57.236 Attached 
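The -6 repeated throughout the failures above is -ENXIO ("No such device or address") surfacing once the TCP connection to the target is torn down: spdk_nvme_qpair_process_completions() then stops returning a completion count and returns a negative errno instead. Below is a minimal sketch of a polling loop that separates the two cases, using only the public SPDK NVMe API; it is not the test's code, and the helper name is made up:

```c
#include <stdio.h>

#include "spdk/nvme.h"
#include "spdk/string.h"

/* Hypothetical helper: poll one I/O qpair and report how it failed. */
static int
poll_qpair_once(struct spdk_nvme_qpair *qpair)
{
	/* max_completions == 0 means "reap everything that is ready". */
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

	if (rc >= 0) {
		return rc; /* number of completions processed */
	}

	/*
	 * A negative return means the qpair itself failed; for NVMe/TCP a
	 * dropped connection surfaces as -ENXIO, the "CQ transport error -6"
	 * printed in the log above.  The caller must fail or resubmit the
	 * outstanding I/O and reconnect the qpair.
	 */
	fprintf(stderr, "qpair poll failed: %s\n", spdk_strerror(-rc));
	return rc;
}
```

Each "Write completed with error (sct=0, sc=8)" line is one queued request being completed back to the benchmark during that teardown; status code type 0, status code 0x08 is the generic "Command Aborted due to SQ Deletion" status.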
00:23:57.236 Initializing NVMe Controllers
00:23:57.236 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:23:57.236 Controller IO queue size 128, less than required.
00:23:57.236 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:57.236 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:23:57.236 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:23:57.236 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:23:57.236 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:23:57.236 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:23:57.236 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:23:57.236 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:23:57.236 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:57.236 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:23:57.236 [the same two-line "Controller IO queue size 128, less than required" warning was printed after each of the ten attach lines above]
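The warning above is the perf tool noting that each attached controller only grants a 128-entry I/O queue, so a deeper workload queues its surplus requests inside the driver rather than on the wire. One way to act on the advice is to cap the queue size requested at attach time via the probe callback; the sketch below uses the public SPDK attach API, and the values 64 and 128 are illustrative, not taken from this run:

```c
#include <stdbool.h>

#include "spdk/nvme.h"

/*
 * Probe callback sketch: trim the queue size we ask for before the
 * controller is attached.  The numbers are example values only.
 */
static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	(void)cb_ctx;
	(void)trid;

	opts->io_queue_size = 64;      /* entries per I/O submission queue */
	opts->io_queue_requests = 128; /* driver-side request objects per queue */

	return true; /* attach this controller */
}
```

The other route the message suggests is simply lowering the benchmark's queue depth (spdk_nvme_perf's -q option) below what the target grants, so no request ever waits in the driver's software queue.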
00:23:57.236 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:23:57.236 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:23:57.236 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:23:57.236 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:23:57.236 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:23:57.236 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:23:57.236 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:23:57.236 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:23:57.236 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:57.236 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:23:57.236 Initialization complete. Launching workers.
00:23:57.236 ========================================================
00:23:57.236                                                                          Latency(us)
00:23:57.236 Device Information                                                     :    IOPS   MiB/s  Average      min       max
00:23:57.236 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2)  NSID 1 from core 0: 1931.27   82.98 66294.06   635.84 123472.76
00:23:57.236 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4)  NSID 1 from core 0: 1852.64   79.61 69136.55   845.87 152788.35
00:23:57.236 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3)  NSID 1 from core 0: 1902.67   81.76 67337.89   620.12 121697.39
00:23:57.236 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9)  NSID 1 from core 0: 1895.74   81.46 67611.67   611.50 123658.78
00:23:57.236 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1926.43   82.78 66578.43   888.72 129880.94
00:23:57.236 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5)  NSID 1 from core 0: 1877.65   80.68 68332.20   489.16 117991.46
00:23:57.236 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6)  NSID 1 from core 0: 1886.49   81.06 68042.05   792.14 133729.92
00:23:57.236 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8)  NSID 1 from core 0: 1890.90   81.25 67903.96   849.83 134895.43
00:23:57.236 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1)  NSID 1 from core 0: 1890.06   81.21 67254.11   516.36 116469.91
00:23:57.236 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7)  NSID 1 from core 0: 1887.33   81.10 67369.65   815.34 122723.14
00:23:57.236 ========================================================
00:23:57.236 Total                                                  : 18941.17  813.88 67577.56   489.16 152788.35
00:23:57.236
00:23:57.236 [2024-11-20 15:33:45.559696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ecae0 is same with the state(6) to be set
00:23:57.236 [2024-11-20 15:33:45.559741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ea890 is same with the state(6) to be set
00:23:57.236 [2024-11-20 15:33:45.559770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ea560 is same with the state(6) to be set
00:23:57.236 [2024-11-20 15:33:45.559800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17eba70 is same with the state(6) to be set
00:23:57.236 [2024-11-20 15:33:45.559829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ec900 is same with the state(6) to be set
00:23:57.236 [2024-11-20 15:33:45.559857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17eabc0 is same with the state(6) to be set
00:23:57.236 [2024-11-20 15:33:45.559887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17eaef0 is same with the state(6) to be set
00:23:57.236 [2024-11-20 15:33:45.559925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17eb740 is same with the state(6) to be set
00:23:57.236 [2024-11-20 15:33:45.559958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ec720 is same with the state(6) to be set
00:23:57.236 [2024-11-20 15:33:45.559988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17eb410 is same with the state(6) to be set
00:23:57.236 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:23:57.236 15:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:23:57.808 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 672434
00:23:57.808 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:23:57.808 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 672434
00:23:57.808 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:23:57.808 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:57.808 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:23:57.808 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:57.808 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 672434
00:23:57.808 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:23:57.808 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:23:57.808 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:23:57.808 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:23:57.808 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:23:57.808 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:23:57.808 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:57.808 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:57.808 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:23:57.808 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
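The `NOT wait 672434` exchange above is the harness reaping the perf process and asserting that it failed: `wait` returns the child's exit code, `es=1` records it, and the `(( es > 128 ))` check would have flagged a death-by-signal instead, since shells encode that as 128 plus the signal number. A rough C analogue of that reap-and-invert check, for illustration only (this is not harness code; the child here is a stand-in that always fails):

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int
main(void)
{
	pid_t pid = fork();

	if (pid == 0) {
		/* Child: stand-in for a benchmark that exits non-zero. */
		_exit(1);
	}

	int status;
	if (waitpid(pid, &status, 0) < 0) {
		perror("waitpid");
		return 2;
	}

	/* Mirror the shell: exit code, or 128 + signal for a killed child. */
	int es = WIFEXITED(status) ? WEXITSTATUS(status)
				   : 128 + WTERMSIG(status);
	printf("child exit status: %d\n", es);

	/* Like `NOT`: succeed only if the child did fail, as the test expects. */
	return es != 0 ? 0 : 1;
}
```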
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:23:57.808 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:57.808 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:23:57.808 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:57.808 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:57.808 rmmod nvme_tcp 00:23:58.069 rmmod nvme_fabrics 00:23:58.069 rmmod nvme_keyring 00:23:58.069 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:58.069 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:23:58.069 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:23:58.069 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 672146 ']' 00:23:58.069 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 672146 00:23:58.069 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 672146 ']' 00:23:58.069 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 672146 00:23:58.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (672146) - No such process 00:23:58.069 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 672146 is not found' 00:23:58.069 Process with pid 672146 is not found 00:23:58.069 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:58.069 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:58.069 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:58.069 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:23:58.069 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:23:58.069 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:58.069 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:23:58.069 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:58.069 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:58.069 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.069 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:58.069 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.983 15:33:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:59.983 00:23:59.983 real 0m10.272s 00:23:59.983 user 0m27.939s 00:23:59.983 sys 0m3.983s 00:23:59.983 15:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:59.983 15:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:59.983 ************************************ 00:23:59.983 END TEST nvmf_shutdown_tc4 00:23:59.983 ************************************ 00:23:59.983 15:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:24:00.243 00:24:00.243 real 0m43.053s 00:24:00.243 user 1m42.875s 00:24:00.243 sys 0m14.027s 00:24:00.243 15:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:00.243 15:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:00.243 ************************************ 00:24:00.243 END TEST nvmf_shutdown 00:24:00.243 ************************************ 00:24:00.243 15:33:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:24:00.243 15:33:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:00.243 15:33:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:00.243 15:33:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:00.243 ************************************ 00:24:00.243 START TEST nvmf_nsid 00:24:00.243 ************************************ 00:24:00.243 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:24:00.243 * Looking for test storage... 
00:24:00.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:00.243 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:00.243 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:24:00.243 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:00.503 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:00.503 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:00.503 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:00.503 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:00.503 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:24:00.503 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:24:00.503 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:24:00.503 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:24:00.503 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:24:00.503 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:24:00.503 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:24:00.503 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:00.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.504 --rc genhtml_branch_coverage=1 00:24:00.504 --rc genhtml_function_coverage=1 00:24:00.504 --rc genhtml_legend=1 00:24:00.504 --rc geninfo_all_blocks=1 00:24:00.504 --rc geninfo_unexecuted_blocks=1 00:24:00.504 00:24:00.504 ' 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:00.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.504 --rc genhtml_branch_coverage=1 00:24:00.504 --rc genhtml_function_coverage=1 00:24:00.504 --rc genhtml_legend=1 00:24:00.504 --rc geninfo_all_blocks=1 00:24:00.504 --rc geninfo_unexecuted_blocks=1 00:24:00.504 00:24:00.504 ' 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:00.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.504 --rc genhtml_branch_coverage=1 00:24:00.504 --rc genhtml_function_coverage=1 00:24:00.504 --rc genhtml_legend=1 00:24:00.504 --rc geninfo_all_blocks=1 00:24:00.504 --rc geninfo_unexecuted_blocks=1 00:24:00.504 00:24:00.504 ' 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:00.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.504 --rc genhtml_branch_coverage=1 00:24:00.504 --rc genhtml_function_coverage=1 00:24:00.504 --rc genhtml_legend=1 00:24:00.504 --rc geninfo_all_blocks=1 00:24:00.504 --rc geninfo_unexecuted_blocks=1 00:24:00.504 00:24:00.504 ' 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:00.504 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:24:00.504 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:08.655 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:08.655 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:24:08.655 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:08.655 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:08.655 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:08.655 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:08.655 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:08.655 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:24:08.655 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:08.655 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:24:08.655 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:24:08.655 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:24:08.655 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:24:08.655 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:24:08.655 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:24:08.655 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:08.655 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:08.655 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:08.655 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:08.655 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:08.655 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:08.655 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:08.655 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:08.655 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:08.655 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:08.655 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:08.655 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:08.655 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:08.655 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:08.655 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:08.655 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:08.655 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:08.655 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:08.655 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:08.656 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:08.656 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
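[Editor's note: the trace above matches each allow-listed PCI ID (the Intel E810 0x8086:0x159b parts in this run) against the bus cache, and the trace just below resolves each matched function to its kernel net device through sysfs. A condensed sketch of the same lookup, assuming the 0x8086/0x159b IDs seen here; not the literal common.sh code:

    #!/usr/bin/env bash
    # Collect net device names for every Intel E810 (0x8086:0x159b) function.
    net_devs=()
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
        for dev in "$pci"/net/*; do                  # e.g. .../0000:4b:00.0/net/cvl_0_0
            [[ -e $dev ]] && net_devs+=("${dev##*/}")
        done
    done
    printf 'Found net device: %s\n' "${net_devs[@]}"
]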
00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:08.656 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:08.656 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:08.656 15:33:56 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:08.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:08.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:24:08.656 00:24:08.656 --- 10.0.0.2 ping statistics --- 00:24:08.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.656 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:08.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:08.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:24:08.656 00:24:08.656 --- 10.0.0.1 ping statistics --- 00:24:08.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.656 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=677849 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 677849 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 677849 ']' 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:08.656 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:08.656 [2024-11-20 15:33:56.868816] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
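[Editor's note: the ping exchange above is the last step of the namespace plumbing this suite uses for NVMe/TCP on physical NICs: one E810 port is moved into a private network namespace to act as the target, both sides get 10.0.0.x/24 addresses, TCP port 4420 is opened on the initiator side, and one ping in each direction proves the path before any NVMe traffic. Condensed from the commands traced above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
]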
00:24:08.656 [2024-11-20 15:33:56.868882] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:08.656 [2024-11-20 15:33:56.967155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.656 [2024-11-20 15:33:57.018941] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:08.656 [2024-11-20 15:33:57.018991] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:08.656 [2024-11-20 15:33:57.019005] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:08.656 [2024-11-20 15:33:57.019012] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:08.656 [2024-11-20 15:33:57.019018] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:08.656 [2024-11-20 15:33:57.019777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.918 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:08.918 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:24:08.918 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:08.918 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:08.918 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:08.918 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:08.918 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:08.918 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=678125 00:24:08.918 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:24:08.918 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:24:08.918 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:24:08.918 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:24:08.918 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:08.918 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:08.918 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.918 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.918 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:08.918 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.918 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:08.918 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:08.918 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
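[Editor's note: nvmfappstart above backgrounded nvmf_tgt inside the namespace and then sat in waitforlisten (local max_retries=100) until the target's RPC socket answered. A minimal sketch of that readiness loop, assuming SPDK's scripts/rpc.py with its rpc_get_methods call as the probe; the exact probe used by autotest_common.sh may differ:

    pid=677849 rpc_addr=/var/tmp/spdk.sock               # values from this run
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || { echo "process $pid exited early"; exit 1; }
        # ready once the UNIX-domain RPC socket accepts a request
        scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && exit 0
        sleep 0.1
    done
    echo "timed out waiting for $pid on $rpc_addr"; exit 1
]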
00:24:08.918 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:24:08.918 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:24:08.918 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=29bb642d-a079-43f6-a640-267c1fb38e8d 00:24:08.918 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:24:08.918 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=1c4d5c89-eef5-4b0a-9207-47a867582703 00:24:08.918 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:24:08.918 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=fabab9f9-30ac-486c-ac76-e498fe4bad79 00:24:08.918 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:24:08.918 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.918 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:08.918 null0 00:24:08.918 null1 00:24:08.918 null2 00:24:08.918 [2024-11-20 15:33:57.786131] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:08.918 [2024-11-20 15:33:57.790870] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:24:08.918 [2024-11-20 15:33:57.790930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid678125 ] 00:24:08.918 [2024-11-20 15:33:57.810437] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:08.918 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.918 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 678125 /var/tmp/tgt2.sock 00:24:08.918 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 678125 ']' 00:24:08.918 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:24:08.918 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:08.918 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:24:08.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
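[Editor's note: the three uuidgen values above become namespace UUIDs on the second target (tgt2 on /var/tmp/tgt2.sock); the checks traced further below verify that the NGUID the kernel initiator reports for each namespace equals that UUID with the dashes stripped and the hex uppercased, which is the uuid2nguid/nvme_get_nguid pattern in the trace. A sketch of one such check, assuming the nvme0n1 device name from this run:

    uuid=29bb642d-a079-43f6-a640-267c1fb38e8d            # ns1uuid from this run
    expected=$(tr -d - <<< "$uuid"); expected=${expected^^}
    nguid=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
    [[ ${nguid^^} == "$expected" ]] && echo "NSID 1 NGUID matches its UUID"
]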
00:24:08.918 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:08.918 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:09.178 [2024-11-20 15:33:57.883990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.178 [2024-11-20 15:33:57.938389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.440 15:33:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:09.440 15:33:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:24:09.440 15:33:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:24:09.701 [2024-11-20 15:33:58.504618] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:09.701 [2024-11-20 15:33:58.520808] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:24:09.701 nvme0n1 nvme0n2 00:24:09.701 nvme1n1 00:24:09.701 15:33:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:24:09.702 15:33:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:24:09.702 15:33:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:11.086 15:34:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:24:11.086 15:34:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:24:11.086 15:34:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:24:11.086 15:34:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:24:11.086 15:34:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:24:11.086 15:34:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:24:11.086 15:34:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:24:11.086 15:34:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:11.087 15:34:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:11.087 15:34:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:11.087 15:34:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:24:11.087 15:34:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:24:11.087 15:34:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:24:12.470 15:34:01 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 29bb642d-a079-43f6-a640-267c1fb38e8d 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=29bb642da07943f6a640267c1fb38e8d 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 29BB642DA07943F6A640267C1FB38E8D 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 29BB642DA07943F6A640267C1FB38E8D == \2\9\B\B\6\4\2\D\A\0\7\9\4\3\F\6\A\6\4\0\2\6\7\C\1\F\B\3\8\E\8\D ]] 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 1c4d5c89-eef5-4b0a-9207-47a867582703 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=1c4d5c89eef54b0a920747a867582703 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 1C4D5C89EEF54B0A920747A867582703 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 1C4D5C89EEF54B0A920747A867582703 == \1\C\4\D\5\C\8\9\E\E\F\5\4\B\0\A\9\2\0\7\4\7\A\8\6\7\5\8\2\7\0\3 ]] 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:24:12.470 15:34:01 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid fabab9f9-30ac-486c-ac76-e498fe4bad79 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=fabab9f930ac486cac76e498fe4bad79 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo FABAB9F930AC486CAC76E498FE4BAD79 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ FABAB9F930AC486CAC76E498FE4BAD79 == \F\A\B\A\B\9\F\9\3\0\A\C\4\8\6\C\A\C\7\6\E\4\9\8\F\E\4\B\A\D\7\9 ]] 00:24:12.470 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:24:12.730 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:24:12.730 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:24:12.730 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 678125 00:24:12.730 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 678125 ']' 00:24:12.730 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 678125 00:24:12.730 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:12.730 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:12.730 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 678125 00:24:12.730 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:12.730 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:12.730 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 678125' 00:24:12.730 killing process with pid 678125 00:24:12.730 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 678125 00:24:12.730 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 678125 00:24:12.990 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:24:12.990 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:12.990 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:24:12.990 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:12.990 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set 
+e 00:24:12.990 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:12.990 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:12.990 rmmod nvme_tcp 00:24:12.990 rmmod nvme_fabrics 00:24:12.990 rmmod nvme_keyring 00:24:12.990 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:12.990 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:24:12.990 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:24:12.990 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 677849 ']' 00:24:12.990 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 677849 00:24:12.990 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 677849 ']' 00:24:12.990 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 677849 00:24:12.990 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:12.990 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:12.990 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 677849 00:24:12.990 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:12.990 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:12.990 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 677849' 00:24:12.990 killing process with pid 677849 00:24:12.990 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 677849 00:24:12.990 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 677849 00:24:13.249 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:13.249 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:13.249 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:13.250 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:24:13.250 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:24:13.250 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:13.250 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:24:13.250 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:13.250 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:13.250 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:13.250 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:13.250 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.160 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:15.160 00:24:15.160 real 0m15.013s 00:24:15.160 user 0m11.375s 00:24:15.160 
sys 0m6.981s 00:24:15.160 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:15.160 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:15.160 ************************************ 00:24:15.160 END TEST nvmf_nsid 00:24:15.160 ************************************ 00:24:15.160 15:34:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:24:15.160 00:24:15.160 real 13m2.641s 00:24:15.160 user 27m13.646s 00:24:15.160 sys 3m57.667s 00:24:15.160 15:34:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:15.160 15:34:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:15.160 ************************************ 00:24:15.160 END TEST nvmf_target_extra 00:24:15.160 ************************************ 00:24:15.420 15:34:04 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:15.420 15:34:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:15.420 15:34:04 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:15.420 15:34:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:15.420 ************************************ 00:24:15.420 START TEST nvmf_host 00:24:15.420 ************************************ 00:24:15.421 15:34:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:15.421 * Looking for test storage... 00:24:15.421 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:24:15.421 15:34:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:15.421 15:34:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:24:15.421 15:34:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:15.421 15:34:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:15.421 15:34:04 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:15.421 15:34:04 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:15.421 15:34:04 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:15.421 15:34:04 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:15.421 15:34:04 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:15.421 15:34:04 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:15.421 15:34:04 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:15.421 15:34:04 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:15.421 15:34:04 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:15.421 15:34:04 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:15.421 15:34:04 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:15.421 15:34:04 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:24:15.421 15:34:04 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:24:15.421 15:34:04 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:15.421 15:34:04 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:15.421 15:34:04 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:24:15.421 15:34:04 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:24:15.421 15:34:04 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:15.421 15:34:04 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:24:15.421 15:34:04 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:15.421 15:34:04 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:24:15.421 15:34:04 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:24:15.421 15:34:04 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:15.421 15:34:04 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:24:15.681 15:34:04 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:15.681 15:34:04 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:15.681 15:34:04 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:15.681 15:34:04 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:24:15.681 15:34:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:15.681 15:34:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:15.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.681 --rc genhtml_branch_coverage=1 00:24:15.681 --rc genhtml_function_coverage=1 00:24:15.681 --rc genhtml_legend=1 00:24:15.681 --rc geninfo_all_blocks=1 00:24:15.681 --rc geninfo_unexecuted_blocks=1 00:24:15.681 00:24:15.681 ' 00:24:15.681 15:34:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:15.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.681 --rc genhtml_branch_coverage=1 00:24:15.681 --rc genhtml_function_coverage=1 00:24:15.681 --rc genhtml_legend=1 00:24:15.681 --rc geninfo_all_blocks=1 00:24:15.681 --rc geninfo_unexecuted_blocks=1 00:24:15.681 00:24:15.681 ' 00:24:15.681 15:34:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:15.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.681 --rc genhtml_branch_coverage=1 00:24:15.681 --rc genhtml_function_coverage=1 00:24:15.681 --rc genhtml_legend=1 00:24:15.682 --rc geninfo_all_blocks=1 00:24:15.682 --rc geninfo_unexecuted_blocks=1 00:24:15.682 00:24:15.682 ' 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:15.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.682 --rc genhtml_branch_coverage=1 00:24:15.682 --rc genhtml_function_coverage=1 00:24:15.682 --rc genhtml_legend=1 00:24:15.682 --rc geninfo_all_blocks=1 00:24:15.682 --rc geninfo_unexecuted_blocks=1 00:24:15.682 00:24:15.682 ' 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
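[Editor's note] The cmp_versions trace above decides whether the installed lcov (1.15) predates version 2 by splitting both strings on '.', '-' or ':' and comparing numerically, field by field. A self-contained sketch of the same algorithm:

    lt_sketch() {    # returns 0 when version $1 sorts before version $2
        local IFS=.-: v
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
            ((a < b)) && return 0
            ((a > b)) && return 1
        done
        return 1    # equal versions are not less-than
    }

    lt_sketch 1.15 2 && echo "old lcov, enable compatibility options"   # matches the trace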
00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
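[Editor's note] common.sh above generates a fresh host NQN with nvme gen-hostnqn and stores its uuid suffix as NVME_HOSTID. A hedged example of how such values are typically consumed on the initiator side (the subsystem NQN and addresses are the ones this test configures later; the HOSTID derivation is assumed from the traced values):

    NVME_HOSTNQN=$(nvme gen-hostnqn)         # nqn.2014-08.org.nvmexpress:uuid:...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}      # uuid portion, as in the trace
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"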
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:15.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.682 ************************************ 00:24:15.682 START TEST nvmf_multicontroller 00:24:15.682 ************************************ 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:15.682 * Looking for test storage... 
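[Editor's note] The "[: : integer expression expected" message captured above is a real script bug: common.sh line 33 expands to '[' '' -eq 1 ']' because the variable under test is unset in this environment. The log only shows the empty expansion, so the variable name below is an assumption; the guard itself cannot trip the error:

    # assumed variable name; the trace only shows '[' '' -eq 1 ']'
    if [ "${SPDK_RUN_NON_ROOT:-0}" -eq 1 ]; then
        echo "would append non-root app args here"
    fi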
00:24:15.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:24:15.682 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:15.943 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:15.943 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:15.943 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:15.943 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:15.943 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:24:15.943 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:24:15.943 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:24:15.943 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:24:15.943 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:24:15.943 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:24:15.943 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:24:15.943 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:15.943 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:24:15.943 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:24:15.943 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:15.943 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:15.943 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:24:15.943 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:24:15.943 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:15.943 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:24:15.943 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:24:15.943 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:24:15.943 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:24:15.943 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:15.943 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:24:15.943 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:24:15.943 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:15.943 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:15.943 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:24:15.943 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:15.943 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:15.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.943 --rc genhtml_branch_coverage=1 00:24:15.943 --rc genhtml_function_coverage=1 00:24:15.943 --rc genhtml_legend=1 00:24:15.943 --rc geninfo_all_blocks=1 00:24:15.943 --rc geninfo_unexecuted_blocks=1 00:24:15.943 00:24:15.943 ' 00:24:15.943 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:15.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.943 --rc genhtml_branch_coverage=1 00:24:15.943 --rc genhtml_function_coverage=1 00:24:15.943 --rc genhtml_legend=1 00:24:15.943 --rc geninfo_all_blocks=1 00:24:15.943 --rc geninfo_unexecuted_blocks=1 00:24:15.943 00:24:15.943 ' 00:24:15.943 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:15.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.943 --rc genhtml_branch_coverage=1 00:24:15.943 --rc genhtml_function_coverage=1 00:24:15.943 --rc genhtml_legend=1 00:24:15.943 --rc geninfo_all_blocks=1 00:24:15.943 --rc geninfo_unexecuted_blocks=1 00:24:15.943 00:24:15.943 ' 00:24:15.943 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:15.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.944 --rc genhtml_branch_coverage=1 00:24:15.944 --rc genhtml_function_coverage=1 00:24:15.944 --rc genhtml_legend=1 00:24:15.944 --rc geninfo_all_blocks=1 00:24:15.944 --rc geninfo_unexecuted_blocks=1 00:24:15.944 00:24:15.944 ' 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:24:15.944 15:34:04 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:15.944 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:15.944 15:34:04 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:24:15.944 15:34:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:24.082 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:24.082 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:24:24.082 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:24.082 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:24.082 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:24.082 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:24.082 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:24:24.083 
15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:24.083 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:24.083 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:24.083 15:34:11 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:24.083 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:24.083 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
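[Editor's note] The device scan above matches the two e810 ports (device id 0x159b under vendor 0x8086) and then resolves their kernel interfaces through sysfs. The essential walk, with the PCI addresses from this run:

    for pci in 0000:4b:00.0 0000:4b:00.1; do
        # each bound NIC exposes its netdevs under /sys/bus/pci/devices/<addr>/net/
        for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$netdir" ] || continue
            echo "Found net devices under $pci: ${netdir##*/}"   # cvl_0_0 / cvl_0_1
        done
    done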
00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:24.083 15:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:24.083 15:34:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:24.083 15:34:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:24.083 15:34:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:24.083 15:34:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:24.083 15:34:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:24.083 15:34:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:24.083 15:34:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:24.083 15:34:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:24.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:24.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.825 ms 00:24:24.083 00:24:24.083 --- 10.0.0.2 ping statistics --- 00:24:24.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.083 rtt min/avg/max/mdev = 0.825/0.825/0.825/0.000 ms 00:24:24.083 15:34:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:24.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:24.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms 00:24:24.083 00:24:24.083 --- 10.0.0.1 ping statistics --- 00:24:24.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.083 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:24:24.083 15:34:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:24.083 15:34:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:24:24.083 15:34:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:24.083 15:34:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:24.083 15:34:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:24.083 15:34:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:24.083 15:34:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:24.084 15:34:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:24.084 15:34:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:24.084 15:34:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:24.084 15:34:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:24.084 15:34:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:24.084 15:34:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:24.084 15:34:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=683228 00:24:24.084 15:34:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 683228 00:24:24.084 15:34:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:24.084 15:34:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 683228 ']' 00:24:24.084 15:34:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:24.084 15:34:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:24.084 15:34:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:24.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:24.084 15:34:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:24.084 15:34:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:24.084 [2024-11-20 15:34:12.273079] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
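[Editor's note] nvmf_tcp_init above splits the two physical ports across network namespaces: cvl_0_0 (target side, 10.0.0.2) moves into cvl_0_0_ns_spdk, cvl_0_1 (initiator side, 10.0.0.1) stays in the root namespace, and reachability is confirmed with a ping in each direction. Condensed replay, names and addresses exactly as traced:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port on the initiator interface, tagged for later cleanup
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator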
00:24:24.084 [2024-11-20 15:34:12.273146] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:24.084 [2024-11-20 15:34:12.371452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:24.084 [2024-11-20 15:34:12.423917] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:24.084 [2024-11-20 15:34:12.423971] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:24.084 [2024-11-20 15:34:12.423980] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:24.084 [2024-11-20 15:34:12.423987] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:24.084 [2024-11-20 15:34:12.423994] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:24.084 [2024-11-20 15:34:12.425830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:24.084 [2024-11-20 15:34:12.425987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:24.084 [2024-11-20 15:34:12.425989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:24.345 [2024-11-20 15:34:13.136693] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:24.345 Malloc0 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
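[Editor's note] The three reactor lines above follow directly from the -m 0xE mask handed to nvmf_tgt: 0xE is binary 1110, so cores 1, 2 and 3 are claimed and core 0 is left free. A quick check:

    mask=0xE
    for core in {0..3}; do
        (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
    done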
common/autotest_common.sh@10 -- # set +x 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:24.345 [2024-11-20 15:34:13.216126] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:24.345 [2024-11-20 15:34:13.227992] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:24.345 Malloc1 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.345 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:24.607 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.607 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=683432 00:24:24.607 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:24.607 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:24.607 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 683432 /var/tmp/bdevperf.sock 00:24:24.607 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 683432 ']' 00:24:24.607 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:24.607 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:24.607 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:24.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
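[Editor's note] At this point the target is fully provisioned: a TCP transport, two 64 MiB malloc bdevs, and two subsystems each listening on ports 4420 and 4421 of 10.0.0.2, after which bdevperf is launched in suspended (-z) mode. The same provisioning sequence as standalone rpc.py calls (the test issues these through its rpc_cmd wrapper inside the target namespace):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192          # flags as traced above
    $rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MiB, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # cnode2 / Malloc1 are created identically in the trace above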
00:24:24.607 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:24.607 15:34:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:25.550 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:25.550 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:24:25.550 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:25.550 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.550 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:25.550 NVMe0n1 00:24:25.550 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.550 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:25.550 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.550 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:25.550 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:25.550 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.550 1 00:24:25.550 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:25.550 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:25.550 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:25.550 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:25.550 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:25.550 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:25.550 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:25.550 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:25.550 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.550 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:25.550 request: 00:24:25.550 { 00:24:25.550 "name": "NVMe0", 00:24:25.550 "trtype": "tcp", 00:24:25.550 "traddr": "10.0.0.2", 00:24:25.550 "adrfam": "ipv4", 00:24:25.550 "trsvcid": "4420", 00:24:25.550 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:24:25.550 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:25.550 "hostaddr": "10.0.0.1", 00:24:25.550 "prchk_reftag": false, 00:24:25.550 "prchk_guard": false, 00:24:25.550 "hdgst": false, 00:24:25.550 "ddgst": false, 00:24:25.550 "allow_unrecognized_csi": false, 00:24:25.550 "method": "bdev_nvme_attach_controller", 00:24:25.550 "req_id": 1 00:24:25.550 } 00:24:25.550 Got JSON-RPC error response 00:24:25.550 response: 00:24:25.550 { 00:24:25.550 "code": -114, 00:24:25.550 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:25.550 } 00:24:25.550 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:25.550 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:25.550 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:25.550 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:25.550 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:25.551 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:25.551 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:25.551 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:25.551 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:25.551 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:25.551 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:25.551 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:25.551 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:25.551 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.551 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:25.551 request: 00:24:25.551 { 00:24:25.551 "name": "NVMe0", 00:24:25.551 "trtype": "tcp", 00:24:25.551 "traddr": "10.0.0.2", 00:24:25.551 "adrfam": "ipv4", 00:24:25.551 "trsvcid": "4420", 00:24:25.551 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:25.551 "hostaddr": "10.0.0.1", 00:24:25.551 "prchk_reftag": false, 00:24:25.551 "prchk_guard": false, 00:24:25.551 "hdgst": false, 00:24:25.551 "ddgst": false, 00:24:25.551 "allow_unrecognized_csi": false, 00:24:25.551 "method": "bdev_nvme_attach_controller", 00:24:25.551 "req_id": 1 00:24:25.551 } 00:24:25.551 Got JSON-RPC error response 00:24:25.551 response: 00:24:25.551 { 00:24:25.551 "code": -114, 00:24:25.551 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:25.551 } 00:24:25.551 15:34:14 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:25.551 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:25.551 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:25.551 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:25.551 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:25.551 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:25.551 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:25.551 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:25.551 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:25.551 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:25.551 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:25.551 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:25.551 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:25.551 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.551 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:25.813 request: 00:24:25.813 { 00:24:25.813 "name": "NVMe0", 00:24:25.813 "trtype": "tcp", 00:24:25.813 "traddr": "10.0.0.2", 00:24:25.813 "adrfam": "ipv4", 00:24:25.813 "trsvcid": "4420", 00:24:25.813 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.813 "hostaddr": "10.0.0.1", 00:24:25.813 "prchk_reftag": false, 00:24:25.813 "prchk_guard": false, 00:24:25.813 "hdgst": false, 00:24:25.813 "ddgst": false, 00:24:25.813 "multipath": "disable", 00:24:25.813 "allow_unrecognized_csi": false, 00:24:25.813 "method": "bdev_nvme_attach_controller", 00:24:25.813 "req_id": 1 00:24:25.813 } 00:24:25.813 Got JSON-RPC error response 00:24:25.813 response: 00:24:25.813 { 00:24:25.813 "code": -114, 00:24:25.813 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:24:25.813 } 00:24:25.813 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:25.813 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:25.813 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:25.813 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:25.813 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:25.813 15:34:14 
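[Editor's note] The attach attempts above are wrapped in the harness's NOT helper: each RPC is expected to fail with JSON-RPC error -114, and the helper inverts the exit status so an expected failure counts as a pass (the es bookkeeping in the trace also handles statuses above 128). A condensed sketch of the pattern, not the exact harness code:

    NOT() {    # succeed only when the wrapped command fails
        local es=0
        "$@" || es=$?
        (( es == 0 )) && return 1    # unexpected success
        return 0                     # failure was expected
    }

    NOT false && echo "expected failure observed"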
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:25.813 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:25.813 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:25.813 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:25.813 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:25.813 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:25.813 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:25.813 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:25.813 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.813 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:25.813 request: 00:24:25.813 { 00:24:25.813 "name": "NVMe0", 00:24:25.813 "trtype": "tcp", 00:24:25.813 "traddr": "10.0.0.2", 00:24:25.813 "adrfam": "ipv4", 00:24:25.813 "trsvcid": "4420", 00:24:25.813 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.813 "hostaddr": "10.0.0.1", 00:24:25.813 "prchk_reftag": false, 00:24:25.813 "prchk_guard": false, 00:24:25.813 "hdgst": false, 00:24:25.813 "ddgst": false, 00:24:25.813 "multipath": "failover", 00:24:25.813 "allow_unrecognized_csi": false, 00:24:25.813 "method": "bdev_nvme_attach_controller", 00:24:25.813 "req_id": 1 00:24:25.813 } 00:24:25.813 Got JSON-RPC error response 00:24:25.813 response: 00:24:25.813 { 00:24:25.813 "code": -114, 00:24:25.813 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:25.813 } 00:24:25.813 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:25.813 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:25.813 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:25.813 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:25.813 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:25.813 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:25.813 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.813 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:25.813 NVMe0n1 00:24:25.813 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
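The three rejected attach calls above are the negative-path checks in multicontroller.sh: reusing controller name NVMe0 for a different subsystem (cnode2), re-attaching it with multipath disabled (-x disable), and re-adding the identical 10.0.0.2:4420 portal with -x failover all fail with JSON-RPC error -114, while the call that follows succeeds because it adds a second portal (4421) of the same subsystem under the same name, leaving a single NVMe0n1 bdev with two paths. A minimal bash sketch of that flow, assuming an SPDK application is listening on /var/tmp/bdevperf.sock and a target that exports nqn.2016-06.io.spdk:cnode1 on both 10.0.0.2:4420 and 10.0.0.2:4421 (names, addresses, and flags are copied from the trace; scripts/rpc.py stands in for the harness's rpc_cmd wrapper):

  RPC="scripts/rpc.py -s /var/tmp/bdevperf.sock"
  # First path: creates controller NVMe0 and exposes bdev NVMe0n1.
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
  # Same name, different subsystem: rejected with -114, as traced above.
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 || echo "rejected as expected"
  # Second portal of the same subsystem under the same name: accepted,
  # giving NVMe0n1 an extra path that can later be detached per portal.
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1

The -s /var/tmp/bdevperf.sock socket option is what lets these RPCs drive the bdevperf process rather than the main nvmf target, which is also how the @83 detach below removes only the 4421 path.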
00:24:25.813 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:25.813 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.813 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:25.813 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.813 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:25.813 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.813 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:25.813 00:24:25.813 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.813 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:25.813 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:25.813 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.813 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:25.813 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.813 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:25.813 15:34:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:27.197 { 00:24:27.197 "results": [ 00:24:27.197 { 00:24:27.197 "job": "NVMe0n1", 00:24:27.197 "core_mask": "0x1", 00:24:27.197 "workload": "write", 00:24:27.197 "status": "finished", 00:24:27.197 "queue_depth": 128, 00:24:27.197 "io_size": 4096, 00:24:27.197 "runtime": 1.006859, 00:24:27.197 "iops": 21635.601409929295, 00:24:27.197 "mibps": 84.51406800753631, 00:24:27.197 "io_failed": 0, 00:24:27.197 "io_timeout": 0, 00:24:27.197 "avg_latency_us": 5902.950725915045, 00:24:27.197 "min_latency_us": 2102.6133333333332, 00:24:27.197 "max_latency_us": 16602.453333333335 00:24:27.197 } 00:24:27.197 ], 00:24:27.197 "core_count": 1 00:24:27.197 } 00:24:27.197 15:34:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:27.197 15:34:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.197 15:34:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:27.197 15:34:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.197 15:34:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:24:27.197 15:34:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 683432 00:24:27.197 15:34:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 683432 ']' 00:24:27.197 15:34:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 683432 00:24:27.197 15:34:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:24:27.197 15:34:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:27.197 15:34:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 683432 00:24:27.197 15:34:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:27.197 15:34:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:27.197 15:34:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 683432' 00:24:27.197 killing process with pid 683432 00:24:27.197 15:34:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 683432 00:24:27.197 15:34:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 683432 00:24:27.197 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:27.197 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.197 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:27.197 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.197 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:27.197 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.197 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:27.198 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.198 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:24:27.198 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:27.198 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:24:27.198 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:27.198 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:24:27.198 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:24:27.198 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:27.198 [2024-11-20 15:34:13.357604] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
00:24:27.198 [2024-11-20 15:34:13.357675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid683432 ] 00:24:27.198 [2024-11-20 15:34:13.449714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.198 [2024-11-20 15:34:13.504676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.198 [2024-11-20 15:34:14.692114] bdev.c:4717:bdev_name_add: *ERROR*: Bdev name 22ffe149-8de2-473a-9428-1cca33fd7891 already exists 00:24:27.198 [2024-11-20 15:34:14.692156] bdev.c:7917:bdev_register: *ERROR*: Unable to add uuid:22ffe149-8de2-473a-9428-1cca33fd7891 alias for bdev NVMe1n1 00:24:27.198 [2024-11-20 15:34:14.692173] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:27.198 Running I/O for 1 seconds... 00:24:27.198 21579.00 IOPS, 84.29 MiB/s 00:24:27.198 Latency(us) 00:24:27.198 [2024-11-20T14:34:16.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:27.198 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:27.198 NVMe0n1 : 1.01 21635.60 84.51 0.00 0.00 5902.95 2102.61 16602.45 00:24:27.198 [2024-11-20T14:34:16.158Z] =================================================================================================================== 00:24:27.198 [2024-11-20T14:34:16.158Z] Total : 21635.60 84.51 0.00 0.00 5902.95 2102.61 16602.45 00:24:27.198 Received shutdown signal, test time was about 1.000000 seconds 00:24:27.198 00:24:27.198 Latency(us) 00:24:27.198 [2024-11-20T14:34:16.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:27.198 [2024-11-20T14:34:16.158Z] =================================================================================================================== 00:24:27.198 [2024-11-20T14:34:16.158Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:27.198 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:27.198 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:27.198 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:24:27.198 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:24:27.198 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:27.198 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:24:27.198 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:27.198 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:24:27.198 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:27.198 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:27.198 rmmod nvme_tcp 00:24:27.198 rmmod nvme_fabrics 00:24:27.198 rmmod nvme_keyring 00:24:27.198 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:27.459 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:24:27.459 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:24:27.459 
15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 683228 ']' 00:24:27.459 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 683228 00:24:27.459 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 683228 ']' 00:24:27.459 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 683228 00:24:27.459 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:24:27.459 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:27.459 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 683228 00:24:27.459 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:27.459 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:27.459 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 683228' 00:24:27.459 killing process with pid 683228 00:24:27.459 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 683228 00:24:27.459 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 683228 00:24:27.459 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:27.459 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:27.459 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:27.459 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:24:27.459 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:24:27.459 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:27.459 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:24:27.459 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:27.459 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:27.459 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.459 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:27.459 15:34:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:30.004 00:24:30.004 real 0m13.985s 00:24:30.004 user 0m17.062s 00:24:30.004 sys 0m6.523s 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:30.004 ************************************ 00:24:30.004 END TEST nvmf_multicontroller 00:24:30.004 ************************************ 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.004 ************************************ 00:24:30.004 START TEST nvmf_aer 00:24:30.004 ************************************ 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:30.004 * Looking for test storage... 00:24:30.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:30.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.004 --rc genhtml_branch_coverage=1 00:24:30.004 --rc genhtml_function_coverage=1 00:24:30.004 --rc genhtml_legend=1 00:24:30.004 --rc geninfo_all_blocks=1 00:24:30.004 --rc geninfo_unexecuted_blocks=1 00:24:30.004 00:24:30.004 ' 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:30.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.004 --rc genhtml_branch_coverage=1 00:24:30.004 --rc genhtml_function_coverage=1 00:24:30.004 --rc genhtml_legend=1 00:24:30.004 --rc geninfo_all_blocks=1 00:24:30.004 --rc geninfo_unexecuted_blocks=1 00:24:30.004 00:24:30.004 ' 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:30.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.004 --rc genhtml_branch_coverage=1 00:24:30.004 --rc genhtml_function_coverage=1 00:24:30.004 --rc genhtml_legend=1 00:24:30.004 --rc geninfo_all_blocks=1 00:24:30.004 --rc geninfo_unexecuted_blocks=1 00:24:30.004 00:24:30.004 ' 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:30.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.004 --rc genhtml_branch_coverage=1 00:24:30.004 --rc genhtml_function_coverage=1 00:24:30.004 --rc genhtml_legend=1 00:24:30.004 --rc geninfo_all_blocks=1 00:24:30.004 --rc geninfo_unexecuted_blocks=1 00:24:30.004 00:24:30.004 ' 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:24:30.004 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.005 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.005 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.005 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.005 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.005 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.005 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:24:30.005 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.005 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:24:30.005 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:30.005 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:30.005 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:30.005 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:30.005 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:30.005 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:30.005 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:30.005 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:30.005 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:30.005 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:30.005 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:30.005 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:30.005 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:30.005 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:30.005 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:30.005 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:30.005 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.005 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:30.005 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.005 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:30.005 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:24:30.005 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:24:30.005 15:34:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:38.148 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:38.148 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:38.148 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:38.148 15:34:25 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:38.148 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:38.148 15:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:38.148 15:34:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:38.148 15:34:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:38.148 15:34:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:38.148 15:34:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:38.148 15:34:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:38.148 15:34:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:38.148 15:34:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:38.148 
15:34:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:38.148 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:38.148 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:24:38.148 00:24:38.148 --- 10.0.0.2 ping statistics --- 00:24:38.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.148 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:24:38.148 15:34:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:38.148 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:38.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:24:38.149 00:24:38.149 --- 10.0.0.1 ping statistics --- 00:24:38.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.149 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:24:38.149 15:34:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:38.149 15:34:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:24:38.149 15:34:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:38.149 15:34:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:38.149 15:34:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:38.149 15:34:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:38.149 15:34:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:38.149 15:34:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:38.149 15:34:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:38.149 15:34:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:38.149 15:34:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:38.149 15:34:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:38.149 15:34:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.149 15:34:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=688261 00:24:38.149 15:34:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 688261 00:24:38.149 15:34:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:38.149 15:34:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 688261 ']' 00:24:38.149 15:34:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.149 15:34:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:38.149 15:34:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.149 15:34:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:38.149 15:34:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.149 [2024-11-20 15:34:26.385898] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
00:24:38.149 [2024-11-20 15:34:26.385963] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:38.149 [2024-11-20 15:34:26.487525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:38.149 [2024-11-20 15:34:26.540996] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:38.149 [2024-11-20 15:34:26.541050] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:38.149 [2024-11-20 15:34:26.541059] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:38.149 [2024-11-20 15:34:26.541067] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:38.149 [2024-11-20 15:34:26.541073] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:38.149 [2024-11-20 15:34:26.543487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.149 [2024-11-20 15:34:26.543646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:38.149 [2024-11-20 15:34:26.543808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:38.149 [2024-11-20 15:34:26.543808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.410 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:38.410 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:24:38.410 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:38.410 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:38.410 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.410 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:38.410 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:38.410 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.410 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.410 [2024-11-20 15:34:27.272299] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:38.410 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.410 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:38.410 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.410 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.410 Malloc0 00:24:38.410 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.410 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:38.410 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.410 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.410 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:24:38.410 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:38.410 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.410 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.410 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.410 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:38.410 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.410 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.410 [2024-11-20 15:34:27.351277] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:38.410 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.410 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:38.410 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.410 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.410 [ 00:24:38.410 { 00:24:38.410 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:38.410 "subtype": "Discovery", 00:24:38.410 "listen_addresses": [], 00:24:38.410 "allow_any_host": true, 00:24:38.410 "hosts": [] 00:24:38.410 }, 00:24:38.411 { 00:24:38.411 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:38.411 "subtype": "NVMe", 00:24:38.411 "listen_addresses": [ 00:24:38.671 { 00:24:38.671 "trtype": "TCP", 00:24:38.671 "adrfam": "IPv4", 00:24:38.671 "traddr": "10.0.0.2", 00:24:38.671 "trsvcid": "4420" 00:24:38.671 } 00:24:38.671 ], 00:24:38.671 "allow_any_host": true, 00:24:38.671 "hosts": [], 00:24:38.671 "serial_number": "SPDK00000000000001", 00:24:38.671 "model_number": "SPDK bdev Controller", 00:24:38.671 "max_namespaces": 2, 00:24:38.671 "min_cntlid": 1, 00:24:38.671 "max_cntlid": 65519, 00:24:38.671 "namespaces": [ 00:24:38.671 { 00:24:38.671 "nsid": 1, 00:24:38.671 "bdev_name": "Malloc0", 00:24:38.671 "name": "Malloc0", 00:24:38.671 "nguid": "E3B1312F56BA41879A7B06B653040F39", 00:24:38.671 "uuid": "e3b1312f-56ba-4187-9a7b-06b653040f39" 00:24:38.671 } 00:24:38.671 ] 00:24:38.671 } 00:24:38.671 ] 00:24:38.671 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.672 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:38.672 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:38.672 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=688325 00:24:38.672 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:38.672 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:38.672 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:24:38.672 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:38.672 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:24:38.672 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:24:38.672 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:38.672 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:38.672 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:24:38.672 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:24:38.672 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:38.672 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:38.672 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:24:38.672 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:24:38.672 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:38.933 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:38.933 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:38.933 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:24:38.933 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:38.933 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.933 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.933 Malloc1 00:24:38.933 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.933 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:38.933 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.933 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.933 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.933 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:38.933 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.933 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.933 Asynchronous Event Request test 00:24:38.933 Attaching to 10.0.0.2 00:24:38.933 Attached to 10.0.0.2 00:24:38.933 Registering asynchronous event callbacks... 00:24:38.933 Starting namespace attribute notice tests for all controllers... 00:24:38.933 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:38.933 aer_cb - Changed Namespace 00:24:38.933 Cleaning up... 
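The nvmf_get_subsystems dump that follows shows the effect of the hot add: cnode1, created above with -m 2 (max_namespaces 2), now carries Malloc0 as nsid 1 and Malloc1 as nsid 2, and the aer tool's "aer_cb - Changed Namespace" line confirms the matching Asynchronous Event Notice. A condensed sketch of that round trip, assuming a built SPDK tree and a target already serving cnode1 on 10.0.0.2:4420 (binary path, -r string, and RPC arguments are taken from the trace; the polling loop mirrors the harness's waitforfile helper, and the touch-file behaviour of the aer tool's -t option is inferred from that helper):

  # Start the AER listener with the same arguments as the trace above;
  # -t names a file it touches once it is ready (the harness polls for it).
  ./test/nvme/aer/aer \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  aerpid=$!
  while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done
  # Hot-adding a second namespace raises the Changed Namespace AEN that
  # the listener reports as "aer_cb - Changed Namespace".
  scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
  wait "$aerpid"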
00:24:38.933 [ 00:24:38.933 { 00:24:38.933 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:38.933 "subtype": "Discovery", 00:24:38.933 "listen_addresses": [], 00:24:38.933 "allow_any_host": true, 00:24:38.933 "hosts": [] 00:24:38.933 }, 00:24:38.933 { 00:24:38.933 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:38.933 "subtype": "NVMe", 00:24:38.933 "listen_addresses": [ 00:24:38.933 { 00:24:38.933 "trtype": "TCP", 00:24:38.933 "adrfam": "IPv4", 00:24:38.933 "traddr": "10.0.0.2", 00:24:38.933 "trsvcid": "4420" 00:24:38.933 } 00:24:38.933 ], 00:24:38.933 "allow_any_host": true, 00:24:38.933 "hosts": [], 00:24:38.933 "serial_number": "SPDK00000000000001", 00:24:38.934 "model_number": "SPDK bdev Controller", 00:24:38.934 "max_namespaces": 2, 00:24:38.934 "min_cntlid": 1, 00:24:38.934 "max_cntlid": 65519, 00:24:38.934 "namespaces": [ 00:24:38.934 { 00:24:38.934 "nsid": 1, 00:24:38.934 "bdev_name": "Malloc0", 00:24:38.934 "name": "Malloc0", 00:24:38.934 "nguid": "E3B1312F56BA41879A7B06B653040F39", 00:24:38.934 "uuid": "e3b1312f-56ba-4187-9a7b-06b653040f39" 00:24:38.934 }, 00:24:38.934 { 00:24:38.934 "nsid": 2, 00:24:38.934 "bdev_name": "Malloc1", 00:24:38.934 "name": "Malloc1", 00:24:38.934 "nguid": "147C731386D44EE3944D28A784C3EF52", 00:24:38.934 "uuid": "147c7313-86d4-4ee3-944d-28a784c3ef52" 00:24:38.934 } 00:24:38.934 ] 00:24:38.934 } 00:24:38.934 ] 00:24:38.934 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.934 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 688325 00:24:38.934 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:38.934 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.934 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.934 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.934 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:38.934 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.934 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.934 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.934 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:38.934 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.934 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.934 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.934 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:38.934 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:38.934 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:38.934 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:24:38.934 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:38.934 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:24:38.934 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:38.934 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:38.934 rmmod 
nvme_tcp 00:24:38.934 rmmod nvme_fabrics 00:24:38.934 rmmod nvme_keyring 00:24:39.195 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:39.195 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:24:39.195 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:24:39.195 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 688261 ']' 00:24:39.195 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 688261 00:24:39.195 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 688261 ']' 00:24:39.195 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 688261 00:24:39.195 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:24:39.195 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:39.195 15:34:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 688261 00:24:39.195 15:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:39.195 15:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:39.195 15:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 688261' 00:24:39.195 killing process with pid 688261 00:24:39.195 15:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 688261 00:24:39.195 15:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 688261 00:24:39.195 15:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:39.195 15:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:39.195 15:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:39.195 15:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:24:39.455 15:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:24:39.455 15:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:39.455 15:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:24:39.455 15:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:39.455 15:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:39.455 15:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.455 15:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:39.455 15:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.367 15:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:41.367 00:24:41.367 real 0m11.713s 00:24:41.367 user 0m8.655s 00:24:41.367 sys 0m6.275s 00:24:41.367 15:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:41.367 15:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:41.367 ************************************ 00:24:41.367 END TEST nvmf_aer 00:24:41.367 ************************************ 00:24:41.367 15:34:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:41.367 15:34:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:41.367 15:34:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:41.367 15:34:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.629 ************************************ 00:24:41.629 START TEST nvmf_async_init 00:24:41.629 ************************************ 00:24:41.629 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:41.629 * Looking for test storage... 00:24:41.629 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:41.629 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:41.629 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:24:41.629 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:41.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.630 --rc genhtml_branch_coverage=1 00:24:41.630 --rc genhtml_function_coverage=1 00:24:41.630 --rc genhtml_legend=1 00:24:41.630 --rc geninfo_all_blocks=1 00:24:41.630 --rc geninfo_unexecuted_blocks=1 00:24:41.630 00:24:41.630 ' 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:41.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.630 --rc genhtml_branch_coverage=1 00:24:41.630 --rc genhtml_function_coverage=1 00:24:41.630 --rc genhtml_legend=1 00:24:41.630 --rc geninfo_all_blocks=1 00:24:41.630 --rc geninfo_unexecuted_blocks=1 00:24:41.630 00:24:41.630 ' 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:41.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.630 --rc genhtml_branch_coverage=1 00:24:41.630 --rc genhtml_function_coverage=1 00:24:41.630 --rc genhtml_legend=1 00:24:41.630 --rc geninfo_all_blocks=1 00:24:41.630 --rc geninfo_unexecuted_blocks=1 00:24:41.630 00:24:41.630 ' 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:41.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.630 --rc genhtml_branch_coverage=1 00:24:41.630 --rc genhtml_function_coverage=1 00:24:41.630 --rc genhtml_legend=1 00:24:41.630 --rc geninfo_all_blocks=1 00:24:41.630 --rc geninfo_unexecuted_blocks=1 00:24:41.630 00:24:41.630 ' 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:41.630 15:34:30 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:41.630 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.631 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.631 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.631 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:41.631 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.631 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:24:41.631 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:41.631 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:41.631 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:41.631 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:41.631 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:41.631 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:41.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:41.631 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:41.631 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:41.631 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:41.631 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:41.631 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:41.631 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:41.631 15:34:30 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:41.631 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:41.631 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:41.631 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=e0c6737895364aa68366895be872d113 00:24:41.631 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:41.631 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:41.631 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:41.631 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:41.631 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:41.631 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:41.631 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.631 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:41.631 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.631 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:41.631 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:41.631 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:24:41.631 15:34:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:49.775 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:49.775 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:49.775 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:49.775 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:49.775 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:24:49.776 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:49.776 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:49.776 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:49.776 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:49.776 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:49.776 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:49.776 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:49.776 15:34:37 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:49.776 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:49.776 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:49.776 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:49.776 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:49.776 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:49.776 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:49.776 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:49.776 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:49.776 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:49.776 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:49.776 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:49.776 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:49.776 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:49.776 15:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:49.776 15:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:49.776 15:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:49.776 15:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:49.776 15:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:49.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:49.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:24:49.776 00:24:49.776 --- 10.0.0.2 ping statistics --- 00:24:49.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:49.776 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:24:49.776 15:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:49.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:49.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:24:49.776 00:24:49.776 --- 10.0.0.1 ping statistics --- 00:24:49.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:49.776 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:24:49.776 15:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:49.776 15:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:24:49.776 15:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:49.776 15:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:49.776 15:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:49.776 15:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:49.776 15:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:49.776 15:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:49.776 15:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:49.776 15:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:49.776 15:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:49.776 15:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:49.776 15:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:49.776 15:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=692625 00:24:49.776 15:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 692625 00:24:49.776 15:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:49.776 15:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 692625 ']' 00:24:49.776 15:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:49.776 15:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:49.776 15:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:49.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:49.776 15:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:49.776 15:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:49.776 [2024-11-20 15:34:38.158106] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
00:24:49.776 [2024-11-20 15:34:38.158187] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:49.776 [2024-11-20 15:34:38.257357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.776 [2024-11-20 15:34:38.308506] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:49.776 [2024-11-20 15:34:38.308556] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:49.776 [2024-11-20 15:34:38.308565] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:49.776 [2024-11-20 15:34:38.308574] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:49.776 [2024-11-20 15:34:38.308581] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:49.776 [2024-11-20 15:34:38.309380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.037 15:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:50.037 15:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:24:50.037 15:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:50.037 15:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:50.037 15:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.298 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:50.298 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:50.298 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.298 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.298 [2024-11-20 15:34:39.020036] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:50.298 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.298 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:50.298 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.298 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.298 null0 00:24:50.298 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.298 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:50.298 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.298 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.298 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.298 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:50.298 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:50.298 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.298 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.298 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g e0c6737895364aa68366895be872d113 00:24:50.298 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.298 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.298 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.298 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:50.298 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.298 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.298 [2024-11-20 15:34:39.080374] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:50.298 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.298 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:50.298 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.298 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.559 nvme0n1 00:24:50.559 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.559 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:50.559 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.559 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.559 [ 00:24:50.559 { 00:24:50.559 "name": "nvme0n1", 00:24:50.559 "aliases": [ 00:24:50.559 "e0c67378-9536-4aa6-8366-895be872d113" 00:24:50.559 ], 00:24:50.559 "product_name": "NVMe disk", 00:24:50.559 "block_size": 512, 00:24:50.559 "num_blocks": 2097152, 00:24:50.559 "uuid": "e0c67378-9536-4aa6-8366-895be872d113", 00:24:50.559 "numa_id": 0, 00:24:50.559 "assigned_rate_limits": { 00:24:50.559 "rw_ios_per_sec": 0, 00:24:50.559 "rw_mbytes_per_sec": 0, 00:24:50.559 "r_mbytes_per_sec": 0, 00:24:50.559 "w_mbytes_per_sec": 0 00:24:50.559 }, 00:24:50.559 "claimed": false, 00:24:50.559 "zoned": false, 00:24:50.559 "supported_io_types": { 00:24:50.559 "read": true, 00:24:50.559 "write": true, 00:24:50.559 "unmap": false, 00:24:50.559 "flush": true, 00:24:50.559 "reset": true, 00:24:50.559 "nvme_admin": true, 00:24:50.559 "nvme_io": true, 00:24:50.559 "nvme_io_md": false, 00:24:50.559 "write_zeroes": true, 00:24:50.559 "zcopy": false, 00:24:50.559 "get_zone_info": false, 00:24:50.559 "zone_management": false, 00:24:50.559 "zone_append": false, 00:24:50.559 "compare": true, 00:24:50.559 "compare_and_write": true, 00:24:50.559 "abort": true, 00:24:50.559 "seek_hole": false, 00:24:50.559 "seek_data": false, 00:24:50.559 "copy": true, 00:24:50.559 "nvme_iov_md": false 00:24:50.559 }, 00:24:50.559 
"memory_domains": [ 00:24:50.559 { 00:24:50.559 "dma_device_id": "system", 00:24:50.559 "dma_device_type": 1 00:24:50.559 } 00:24:50.559 ], 00:24:50.559 "driver_specific": { 00:24:50.559 "nvme": [ 00:24:50.559 { 00:24:50.559 "trid": { 00:24:50.559 "trtype": "TCP", 00:24:50.559 "adrfam": "IPv4", 00:24:50.559 "traddr": "10.0.0.2", 00:24:50.559 "trsvcid": "4420", 00:24:50.559 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:50.559 }, 00:24:50.559 "ctrlr_data": { 00:24:50.559 "cntlid": 1, 00:24:50.559 "vendor_id": "0x8086", 00:24:50.559 "model_number": "SPDK bdev Controller", 00:24:50.559 "serial_number": "00000000000000000000", 00:24:50.559 "firmware_revision": "25.01", 00:24:50.559 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:50.559 "oacs": { 00:24:50.559 "security": 0, 00:24:50.559 "format": 0, 00:24:50.559 "firmware": 0, 00:24:50.559 "ns_manage": 0 00:24:50.559 }, 00:24:50.559 "multi_ctrlr": true, 00:24:50.559 "ana_reporting": false 00:24:50.559 }, 00:24:50.559 "vs": { 00:24:50.559 "nvme_version": "1.3" 00:24:50.559 }, 00:24:50.559 "ns_data": { 00:24:50.559 "id": 1, 00:24:50.559 "can_share": true 00:24:50.559 } 00:24:50.559 } 00:24:50.559 ], 00:24:50.559 "mp_policy": "active_passive" 00:24:50.559 } 00:24:50.559 } 00:24:50.559 ] 00:24:50.559 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.559 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:50.559 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.559 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.559 [2024-11-20 15:34:39.354970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:50.559 [2024-11-20 15:34:39.355053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ddce0 (9): Bad file descriptor 00:24:50.559 [2024-11-20 15:34:39.487264] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:24:50.559 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.559 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:50.559 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.559 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.559 [ 00:24:50.559 { 00:24:50.559 "name": "nvme0n1", 00:24:50.559 "aliases": [ 00:24:50.559 "e0c67378-9536-4aa6-8366-895be872d113" 00:24:50.559 ], 00:24:50.559 "product_name": "NVMe disk", 00:24:50.559 "block_size": 512, 00:24:50.559 "num_blocks": 2097152, 00:24:50.559 "uuid": "e0c67378-9536-4aa6-8366-895be872d113", 00:24:50.559 "numa_id": 0, 00:24:50.559 "assigned_rate_limits": { 00:24:50.559 "rw_ios_per_sec": 0, 00:24:50.559 "rw_mbytes_per_sec": 0, 00:24:50.559 "r_mbytes_per_sec": 0, 00:24:50.559 "w_mbytes_per_sec": 0 00:24:50.559 }, 00:24:50.559 "claimed": false, 00:24:50.559 "zoned": false, 00:24:50.559 "supported_io_types": { 00:24:50.559 "read": true, 00:24:50.559 "write": true, 00:24:50.559 "unmap": false, 00:24:50.559 "flush": true, 00:24:50.559 "reset": true, 00:24:50.559 "nvme_admin": true, 00:24:50.559 "nvme_io": true, 00:24:50.559 "nvme_io_md": false, 00:24:50.559 "write_zeroes": true, 00:24:50.559 "zcopy": false, 00:24:50.559 "get_zone_info": false, 00:24:50.559 "zone_management": false, 00:24:50.559 "zone_append": false, 00:24:50.559 "compare": true, 00:24:50.559 "compare_and_write": true, 00:24:50.559 "abort": true, 00:24:50.559 "seek_hole": false, 00:24:50.559 "seek_data": false, 00:24:50.559 "copy": true, 00:24:50.560 "nvme_iov_md": false 00:24:50.560 }, 00:24:50.560 "memory_domains": [ 00:24:50.560 { 00:24:50.560 "dma_device_id": "system", 00:24:50.560 "dma_device_type": 1 00:24:50.560 } 00:24:50.560 ], 00:24:50.560 "driver_specific": { 00:24:50.560 "nvme": [ 00:24:50.560 { 00:24:50.560 "trid": { 00:24:50.560 "trtype": "TCP", 00:24:50.560 "adrfam": "IPv4", 00:24:50.560 "traddr": "10.0.0.2", 00:24:50.560 "trsvcid": "4420", 00:24:50.560 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:50.560 }, 00:24:50.560 "ctrlr_data": { 00:24:50.560 "cntlid": 2, 00:24:50.560 "vendor_id": "0x8086", 00:24:50.560 "model_number": "SPDK bdev Controller", 00:24:50.560 "serial_number": "00000000000000000000", 00:24:50.560 "firmware_revision": "25.01", 00:24:50.560 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:50.560 "oacs": { 00:24:50.560 "security": 0, 00:24:50.560 "format": 0, 00:24:50.560 "firmware": 0, 00:24:50.560 "ns_manage": 0 00:24:50.560 }, 00:24:50.560 "multi_ctrlr": true, 00:24:50.560 "ana_reporting": false 00:24:50.560 }, 00:24:50.560 "vs": { 00:24:50.560 "nvme_version": "1.3" 00:24:50.560 }, 00:24:50.560 "ns_data": { 00:24:50.560 "id": 1, 00:24:50.560 "can_share": true 00:24:50.560 } 00:24:50.560 } 00:24:50.560 ], 00:24:50.560 "mp_policy": "active_passive" 00:24:50.560 } 00:24:50.560 } 00:24:50.560 ] 00:24:50.560 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.560 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.560 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.560 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.821 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
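For reference, the plumbing this first leg exercised (host/async_init.sh@26-37, traced further up) reduces to a short RPC sequence. A sketch with scripts/rpc.py assumed as the client; names and flags are as they appear in the trace:

    # target side: TCP transport, a 1024 MiB null bdev with 512 B blocks
    # (2097152 x 512 B, matching the num_blocks in the dumps above), one subsystem
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py bdev_null_create null0 1024 512
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    # the NGUID is a dashless uuidgen; it resurfaces as the bdev UUID on the host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 \
        -g e0c6737895364aa68366895be872d113
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    # host side: attach through the bdev layer; the namespace shows up as nvme0n1
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
        -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0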
00:24:50.821 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:50.821 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.7Wo9kRqMnK 00:24:50.821 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:50.821 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.7Wo9kRqMnK 00:24:50.821 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.7Wo9kRqMnK 00:24:50.821 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.821 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.821 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.821 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:50.821 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.821 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.821 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.821 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:50.821 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.821 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.821 [2024-11-20 15:34:39.575658] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:50.821 [2024-11-20 15:34:39.575820] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:50.821 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.821 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:24:50.821 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.821 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.821 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.821 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:50.821 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.821 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.821 [2024-11-20 15:34:39.599733] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:50.821 nvme0n1 00:24:50.821 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.821 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:24:50.821 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.821 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.821 [ 00:24:50.821 { 00:24:50.821 "name": "nvme0n1", 00:24:50.821 "aliases": [ 00:24:50.821 "e0c67378-9536-4aa6-8366-895be872d113" 00:24:50.821 ], 00:24:50.821 "product_name": "NVMe disk", 00:24:50.821 "block_size": 512, 00:24:50.821 "num_blocks": 2097152, 00:24:50.821 "uuid": "e0c67378-9536-4aa6-8366-895be872d113", 00:24:50.821 "numa_id": 0, 00:24:50.821 "assigned_rate_limits": { 00:24:50.821 "rw_ios_per_sec": 0, 00:24:50.821 "rw_mbytes_per_sec": 0, 00:24:50.821 "r_mbytes_per_sec": 0, 00:24:50.821 "w_mbytes_per_sec": 0 00:24:50.821 }, 00:24:50.821 "claimed": false, 00:24:50.821 "zoned": false, 00:24:50.821 "supported_io_types": { 00:24:50.821 "read": true, 00:24:50.821 "write": true, 00:24:50.821 "unmap": false, 00:24:50.821 "flush": true, 00:24:50.821 "reset": true, 00:24:50.821 "nvme_admin": true, 00:24:50.821 "nvme_io": true, 00:24:50.821 "nvme_io_md": false, 00:24:50.821 "write_zeroes": true, 00:24:50.822 "zcopy": false, 00:24:50.822 "get_zone_info": false, 00:24:50.822 "zone_management": false, 00:24:50.822 "zone_append": false, 00:24:50.822 "compare": true, 00:24:50.822 "compare_and_write": true, 00:24:50.822 "abort": true, 00:24:50.822 "seek_hole": false, 00:24:50.822 "seek_data": false, 00:24:50.822 "copy": true, 00:24:50.822 "nvme_iov_md": false 00:24:50.822 }, 00:24:50.822 "memory_domains": [ 00:24:50.822 { 00:24:50.822 "dma_device_id": "system", 00:24:50.822 "dma_device_type": 1 00:24:50.822 } 00:24:50.822 ], 00:24:50.822 "driver_specific": { 00:24:50.822 "nvme": [ 00:24:50.822 { 00:24:50.822 "trid": { 00:24:50.822 "trtype": "TCP", 00:24:50.822 "adrfam": "IPv4", 00:24:50.822 "traddr": "10.0.0.2", 00:24:50.822 "trsvcid": "4421", 00:24:50.822 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:50.822 }, 00:24:50.822 "ctrlr_data": { 00:24:50.822 "cntlid": 3, 00:24:50.822 "vendor_id": "0x8086", 00:24:50.822 "model_number": "SPDK bdev Controller", 00:24:50.822 "serial_number": "00000000000000000000", 00:24:50.822 "firmware_revision": "25.01", 00:24:50.822 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:50.822 "oacs": { 00:24:50.822 "security": 0, 00:24:50.822 "format": 0, 00:24:50.822 "firmware": 0, 00:24:50.822 "ns_manage": 0 00:24:50.822 }, 00:24:50.822 "multi_ctrlr": true, 00:24:50.822 "ana_reporting": false 00:24:50.822 }, 00:24:50.822 "vs": { 00:24:50.822 "nvme_version": "1.3" 00:24:50.822 }, 00:24:50.822 "ns_data": { 00:24:50.822 "id": 1, 00:24:50.822 "can_share": true 00:24:50.822 } 00:24:50.822 } 00:24:50.822 ], 00:24:50.822 "mp_policy": "active_passive" 00:24:50.822 } 00:24:50.822 } 00:24:50.822 ] 00:24:50.822 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.822 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.822 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.822 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.822 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.822 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.7Wo9kRqMnK 00:24:50.822 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
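The TLS leg just completed (host/async_init.sh@53-66) repeats the attach against a PSK-protected listener on port 4421; note that both the listener and the attach path log that TLS support is considered experimental. Condensed from the trace, again with scripts/rpc.py assumed (the PSK below is the test's throwaway key from the log, not a real secret):

    # register the NVMe/TCP TLS pre-shared key with the keyring
    key_path=$(mktemp)
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
    chmod 0600 "$key_path"
    ./scripts/rpc.py keyring_file_add_key key0 "$key_path"
    # restrict the subsystem, open a --secure-channel listener, admit one host by PSK
    ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host1 --psk key0
    # host side: same attach as before, plus the host NQN and the PSK
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
        -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host1 --psk key0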
00:24:50.822 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:24:50.822 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:50.822 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:24:50.822 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:50.822 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:24:50.822 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:50.822 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:50.822 rmmod nvme_tcp 00:24:50.822 rmmod nvme_fabrics 00:24:50.822 rmmod nvme_keyring 00:24:51.084 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:51.084 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:24:51.084 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:24:51.084 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 692625 ']' 00:24:51.084 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 692625 00:24:51.084 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 692625 ']' 00:24:51.084 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 692625 00:24:51.084 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:24:51.084 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:51.084 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 692625 00:24:51.084 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:51.084 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:51.084 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 692625' 00:24:51.084 killing process with pid 692625 00:24:51.084 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 692625 00:24:51.084 15:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 692625 00:24:51.084 15:34:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:51.084 15:34:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:51.084 15:34:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:51.084 15:34:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:24:51.084 15:34:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:24:51.084 15:34:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:51.084 15:34:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:24:51.084 15:34:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:51.084 15:34:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:51.084 15:34:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.084 
15:34:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:51.084 15:34:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.634 15:34:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:53.634 00:24:53.634 real 0m11.778s 00:24:53.634 user 0m4.106s 00:24:53.634 sys 0m6.248s 00:24:53.634 15:34:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:53.634 15:34:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:53.634 ************************************ 00:24:53.634 END TEST nvmf_async_init 00:24:53.634 ************************************ 00:24:53.634 15:34:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:53.634 15:34:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:53.634 15:34:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:53.634 15:34:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.634 ************************************ 00:24:53.634 START TEST dma 00:24:53.634 ************************************ 00:24:53.634 15:34:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:53.634 * Looking for test storage... 00:24:53.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:53.634 15:34:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:53.634 15:34:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:24:53.634 15:34:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:53.634 15:34:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:53.634 15:34:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:53.634 15:34:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:53.634 15:34:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:53.634 15:34:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:24:53.634 15:34:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:24:53.634 15:34:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:24:53.634 15:34:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:24:53.634 15:34:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:24:53.634 15:34:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:24:53.634 15:34:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:24:53.634 15:34:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:53.634 15:34:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:24:53.634 15:34:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:24:53.634 15:34:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:53.634 15:34:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:53.634 15:34:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:24:53.634 15:34:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:24:53.634 15:34:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:53.634 15:34:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:24:53.634 15:34:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:24:53.634 15:34:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:24:53.634 15:34:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:24:53.634 15:34:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:53.634 15:34:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:24:53.634 15:34:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:24:53.634 15:34:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:53.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.635 --rc genhtml_branch_coverage=1 00:24:53.635 --rc genhtml_function_coverage=1 00:24:53.635 --rc genhtml_legend=1 00:24:53.635 --rc geninfo_all_blocks=1 00:24:53.635 --rc geninfo_unexecuted_blocks=1 00:24:53.635 00:24:53.635 ' 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:53.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.635 --rc genhtml_branch_coverage=1 00:24:53.635 --rc genhtml_function_coverage=1 00:24:53.635 --rc genhtml_legend=1 00:24:53.635 --rc geninfo_all_blocks=1 00:24:53.635 --rc geninfo_unexecuted_blocks=1 00:24:53.635 00:24:53.635 ' 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:53.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.635 --rc genhtml_branch_coverage=1 00:24:53.635 --rc genhtml_function_coverage=1 00:24:53.635 --rc genhtml_legend=1 00:24:53.635 --rc geninfo_all_blocks=1 00:24:53.635 --rc geninfo_unexecuted_blocks=1 00:24:53.635 00:24:53.635 ' 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:53.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.635 --rc genhtml_branch_coverage=1 00:24:53.635 --rc genhtml_function_coverage=1 00:24:53.635 --rc genhtml_legend=1 00:24:53.635 --rc geninfo_all_blocks=1 00:24:53.635 --rc geninfo_unexecuted_blocks=1 00:24:53.635 00:24:53.635 ' 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:53.635 
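The lcov probe above exercises scripts/common.sh's version comparator: `lt 1.15 2` splits both version strings on `.`, `-` and `:` into arrays and compares them component by component. A simplified reconstruction from the traced lines (the real helper also funnels each component through the decimal() validator visible at @353 through @355):

  # Sketch of cmp_versions pieced together from the trace above; simplified.
  cmp_versions() {
      local -a ver1 ver2
      local op=$2 v
      IFS=.-: read -ra ver1 <<< "$1"          # @336: "1.15" -> (1 15)
      IFS=.-: read -ra ver2 <<< "$3"          # @337: "2"    -> (2)
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          (( ver1[v] > ver2[v] )) && { [[ $op == '>' ]]; return; }   # @367
          (( ver1[v] < ver2[v] )) && { [[ $op == '<' ]]; return; }   # @368
      done
      [[ $op == '==' ]]                        # equal in every component
  }
  lt() { cmp_versions "$1" '<' "$2"; }         # so `lt 1.15 2` returns 0 (true)

Unset array slots evaluate to 0 in bash arithmetic, which is why comparing a two-component version against a one-component one works without explicit padding.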
15:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:53.635 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:24:53.635 00:24:53.635 real 0m0.239s 00:24:53.635 user 0m0.156s 00:24:53.635 sys 0m0.099s 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:24:53.635 ************************************ 00:24:53.635 END TEST dma 00:24:53.635 ************************************ 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.635 ************************************ 00:24:53.635 START TEST nvmf_identify 00:24:53.635 
************************************ 00:24:53.635 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:53.898 * Looking for test storage... 00:24:53.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:53.898 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:53.898 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:24:53.898 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:53.898 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:53.898 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:53.898 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:53.898 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:53.898 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:24:53.898 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:24:53.898 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:24:53.898 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:24:53.898 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:24:53.898 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:24:53.898 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:24:53.898 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:53.898 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:24:53.898 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:24:53.898 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:53.898 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:53.898 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:24:53.898 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:24:53.898 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:53.898 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:24:53.898 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:24:53.898 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:24:53.898 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:24:53.898 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:53.898 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:24:53.898 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:24:53.898 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:53.898 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:53.898 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:24:53.898 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:53.898 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:53.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.898 --rc genhtml_branch_coverage=1 00:24:53.898 --rc genhtml_function_coverage=1 00:24:53.898 --rc genhtml_legend=1 00:24:53.898 --rc geninfo_all_blocks=1 00:24:53.898 --rc geninfo_unexecuted_blocks=1 00:24:53.898 00:24:53.898 ' 00:24:53.898 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:53.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.898 --rc genhtml_branch_coverage=1 00:24:53.898 --rc genhtml_function_coverage=1 00:24:53.898 --rc genhtml_legend=1 00:24:53.898 --rc geninfo_all_blocks=1 00:24:53.898 --rc geninfo_unexecuted_blocks=1 00:24:53.898 00:24:53.898 ' 00:24:53.898 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:53.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.899 --rc genhtml_branch_coverage=1 00:24:53.899 --rc genhtml_function_coverage=1 00:24:53.899 --rc genhtml_legend=1 00:24:53.899 --rc geninfo_all_blocks=1 00:24:53.899 --rc geninfo_unexecuted_blocks=1 00:24:53.899 00:24:53.899 ' 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:53.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.899 --rc genhtml_branch_coverage=1 00:24:53.899 --rc genhtml_function_coverage=1 00:24:53.899 --rc genhtml_legend=1 00:24:53.899 --rc geninfo_all_blocks=1 00:24:53.899 --rc geninfo_unexecuted_blocks=1 00:24:53.899 00:24:53.899 ' 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:53.899 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- 
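The `[: : integer expression expected` complaint, printed once per test that sources nvmf/common.sh, is ordinary bash noise rather than a harness failure: the trace shows line 33 expanding to `'[' '' -eq 1 ']'`, and `-eq` refuses a non-numeric operand. A two-line reproduction with the usual default-to-zero guard; the variable name here is made up for illustration:

  unset SOME_FLAG                       # hypothetical flag, unset like the one at common.sh line 33
  [ "$SOME_FLAG" -eq 1 ]                # -> [: : integer expression expected, exit status 2
  [ "${SOME_FLAG:-0}" -eq 1 ]           # defaulting keeps the operand numeric; exits 1, quietly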
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:24:53.899 15:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:02.042 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:02.042 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
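The discovery loop above first matches each PCI function against the e810/x722/mlx device-id tables, then asks sysfs which kernel network interface sits on top of the matched function. The sysfs walk, lifted from the traced lines at @411 and @427:

  # Map a PCI function to its netdev the way nvmf/common.sh does above.
  for pci in 0000:4b:00.0 0000:4b:00.1; do              # the two E810 (0x159b) ports found
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # @411: glob the net/ subdirectory
      pci_net_devs=("${pci_net_devs[@]##*/}")           # @427: keep only the basename
      echo "Found net devices under $pci: ${pci_net_devs[*]}"   # -> cvl_0_0, cvl_0_1
  done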
00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:02.042 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:02.042 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:02.042 15:34:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:02.042 15:34:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:02.042 15:34:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:02.042 15:34:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:02.043 15:34:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:02.043 15:34:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:02.043 15:34:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:02.043 15:34:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:02.043 15:34:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:02.043 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:02.043 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.544 ms 00:25:02.043 00:25:02.043 --- 10.0.0.2 ping statistics --- 00:25:02.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.043 rtt min/avg/max/mdev = 0.544/0.544/0.544/0.000 ms 00:25:02.043 15:34:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:02.043 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:02.043 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:25:02.043 00:25:02.043 --- 10.0.0.1 ping statistics --- 00:25:02.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.043 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:25:02.043 15:34:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:02.043 15:34:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:25:02.043 15:34:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:02.043 15:34:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:02.043 15:34:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:02.043 15:34:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:02.043 15:34:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:02.043 15:34:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:02.043 15:34:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:02.043 15:34:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:02.043 15:34:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:02.043 15:34:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:02.043 15:34:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=697356 00:25:02.043 15:34:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:02.043 15:34:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 697356 00:25:02.043 15:34:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 697356 ']' 00:25:02.043 15:34:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:02.043 15:34:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:02.043 15:34:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:02.043 15:34:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:02.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:02.043 15:34:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:02.043 15:34:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:02.043 [2024-11-20 15:34:50.372710] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
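Collected in one place, the namespace plumbing traced above: one port of the E810 NIC moves into a fresh network namespace to play the target, the other stays in the root namespace as the initiator, and a comment-tagged iptables rule opens the NVMe/TCP port so the teardown helper can strip exactly that rule later. Interface and address names are the ones the harness printed:

  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move port 0 into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                  # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns

The single-ping round trips (0.544 ms and 0.283 ms above) are the gate for nvmf_tcp_init returning 0; only then is nvmf_tgt launched inside the namespace.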
00:25:02.043 [2024-11-20 15:34:50.372776] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:02.043 [2024-11-20 15:34:50.475687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:02.043 [2024-11-20 15:34:50.530033] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:02.043 [2024-11-20 15:34:50.530085] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:02.043 [2024-11-20 15:34:50.530095] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:02.043 [2024-11-20 15:34:50.530102] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:02.043 [2024-11-20 15:34:50.530109] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:02.043 [2024-11-20 15:34:50.532496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:02.043 [2024-11-20 15:34:50.532641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:02.043 [2024-11-20 15:34:50.532801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.043 [2024-11-20 15:34:50.532801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:02.304 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:02.304 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:25:02.304 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:02.304 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.304 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:02.304 [2024-11-20 15:34:51.209398] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:02.304 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.304 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:02.304 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:02.304 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:02.568 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:02.568 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.568 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:02.568 Malloc0 00:25:02.568 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.568 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:02.568 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.568 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:02.568 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.568 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:25:02.568 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.568 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:25:02.568 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.568 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:02.568 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.568 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:25:02.568 [2024-11-20 15:34:51.329385] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:02.568 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.568 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:25:02.568 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.568 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:25:02.568 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.568 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:25:02.568 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.568 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:25:02.568 [
00:25:02.568 {
00:25:02.568 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:25:02.568 "subtype": "Discovery",
00:25:02.568 "listen_addresses": [
00:25:02.568 {
00:25:02.568 "trtype": "TCP",
00:25:02.568 "adrfam": "IPv4",
00:25:02.568 "traddr": "10.0.0.2",
00:25:02.568 "trsvcid": "4420"
00:25:02.568 }
00:25:02.568 ],
00:25:02.568 "allow_any_host": true,
00:25:02.568 "hosts": []
00:25:02.568 },
00:25:02.568 {
00:25:02.568 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:25:02.568 "subtype": "NVMe",
00:25:02.568 "listen_addresses": [
00:25:02.568 {
00:25:02.568 "trtype": "TCP",
00:25:02.568 "adrfam": "IPv4",
00:25:02.568 "traddr": "10.0.0.2",
00:25:02.568 "trsvcid": "4420"
00:25:02.568 }
00:25:02.568 ],
00:25:02.568 "allow_any_host": true,
00:25:02.568 "hosts": [],
00:25:02.568 "serial_number": "SPDK00000000000001",
00:25:02.568 "model_number": "SPDK bdev Controller",
00:25:02.568 "max_namespaces": 32,
00:25:02.568 "min_cntlid": 1,
00:25:02.568 "max_cntlid": 65519,
00:25:02.568 "namespaces": [
00:25:02.568 {
00:25:02.568 "nsid": 1,
00:25:02.568 "bdev_name": "Malloc0",
00:25:02.568 "name": "Malloc0",
00:25:02.568 "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:25:02.568 "eui64": "ABCDEF0123456789",
00:25:02.568 "uuid": "da35657b-44b3-4c15-a4cd-bcffcdd00660"
00:25:02.568 }
00:25:02.568 ]
00:25:02.568 }
00:25:02.568 ]
00:25:02.568 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.568 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:02.568 [2024-11-20 15:34:51.391604] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:25:02.568 [2024-11-20 15:34:51.391651] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid697561 ] 00:25:02.568 [2024-11-20 15:34:51.447868] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:25:02.568 [2024-11-20 15:34:51.447944] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:02.568 [2024-11-20 15:34:51.447950] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:02.568 [2024-11-20 15:34:51.447970] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:02.568 [2024-11-20 15:34:51.447983] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:02.568 [2024-11-20 15:34:51.451682] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:25:02.568 [2024-11-20 15:34:51.451729] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1443690 0 00:25:02.568 [2024-11-20 15:34:51.459181] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:02.568 [2024-11-20 15:34:51.459199] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:02.568 [2024-11-20 15:34:51.459204] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:02.568 [2024-11-20 15:34:51.459208] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:02.568 [2024-11-20 15:34:51.459252] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.568 [2024-11-20 15:34:51.459264] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.568 [2024-11-20 15:34:51.459269] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1443690) 00:25:02.568 [2024-11-20 15:34:51.459286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:02.568 [2024-11-20 15:34:51.459309] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14a5100, cid 0, qid 0 00:25:02.568 [2024-11-20 15:34:51.467177] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.568 [2024-11-20 15:34:51.467189] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.568 [2024-11-20 15:34:51.467193] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.568 [2024-11-20 15:34:51.467198] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14a5100) on tqpair=0x1443690 00:25:02.568 [2024-11-20 15:34:51.467214] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:02.568 [2024-11-20 15:34:51.467223] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:25:02.568 [2024-11-20 15:34:51.467228] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:25:02.568 [2024-11-20 15:34:51.467248] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.568 [2024-11-20 15:34:51.467252] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.568 [2024-11-20 15:34:51.467255] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1443690) 00:25:02.568 [2024-11-20 15:34:51.467264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.568 [2024-11-20 15:34:51.467280] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14a5100, cid 0, qid 0 00:25:02.568 [2024-11-20 15:34:51.467472] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.568 [2024-11-20 15:34:51.467479] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.568 [2024-11-20 15:34:51.467482] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.568 [2024-11-20 15:34:51.467487] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14a5100) on tqpair=0x1443690 00:25:02.569 [2024-11-20 15:34:51.467493] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:25:02.569 [2024-11-20 15:34:51.467501] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:25:02.569 [2024-11-20 15:34:51.467509] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.569 [2024-11-20 15:34:51.467512] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.569 [2024-11-20 15:34:51.467516] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1443690) 00:25:02.569 [2024-11-20 15:34:51.467523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.569 [2024-11-20 15:34:51.467534] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14a5100, cid 0, qid 0 00:25:02.569 [2024-11-20 15:34:51.467745] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.569 [2024-11-20 15:34:51.467753] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.569 [2024-11-20 15:34:51.467756] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.569 [2024-11-20 15:34:51.467760] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14a5100) on tqpair=0x1443690 00:25:02.569 [2024-11-20 15:34:51.467766] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:25:02.569 [2024-11-20 15:34:51.467775] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:25:02.569 [2024-11-20 15:34:51.467781] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.569 [2024-11-20 15:34:51.467789] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.569 [2024-11-20 15:34:51.467793] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1443690) 00:25:02.569 [2024-11-20 15:34:51.467799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.569 [2024-11-20 15:34:51.467810] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14a5100, cid 0, qid 0 
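Stepping back from the connect trace for a moment: the target that spdk_nvme_identify is talking to was provisioned through host/identify.sh@24 to @37 above. rpc_cmd is the harness wrapper around the target's JSON-RPC socket; issued by hand with scripts/rpc.py against the same /var/tmp/spdk.sock, the sequence would look roughly like this (a sketch of equivalent calls, not a copy of the script):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192     # @24: TCP transport, 8192 B IO unit
  $rpc -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0        # @27: 64 MiB ramdisk, 512 B blocks
  $rpc -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001                                           # @28: allow any host
  $rpc -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789  # @31
  $rpc -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420                                         # @34
  $rpc -s /var/tmp/spdk.sock nvmf_subsystem_add_listener discovery \
      -t tcp -a 10.0.0.2 -s 4420                                         # @35
  $rpc -s /var/tmp/spdk.sock nvmf_get_subsystems                         # @37: dumps the JSON listing above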
00:25:02.569 [2024-11-20 15:34:51.468019] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.569 [2024-11-20 15:34:51.468026] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.569 [2024-11-20 15:34:51.468030] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.569 [2024-11-20 15:34:51.468034] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14a5100) on tqpair=0x1443690 00:25:02.569 [2024-11-20 15:34:51.468039] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:02.569 [2024-11-20 15:34:51.468049] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.569 [2024-11-20 15:34:51.468053] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.569 [2024-11-20 15:34:51.468057] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1443690) 00:25:02.569 [2024-11-20 15:34:51.468064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.569 [2024-11-20 15:34:51.468074] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14a5100, cid 0, qid 0 00:25:02.569 [2024-11-20 15:34:51.468260] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.569 [2024-11-20 15:34:51.468267] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.569 [2024-11-20 15:34:51.468270] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.569 [2024-11-20 15:34:51.468274] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14a5100) on tqpair=0x1443690 00:25:02.569 [2024-11-20 15:34:51.468279] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:25:02.569 [2024-11-20 15:34:51.468285] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:25:02.569 [2024-11-20 15:34:51.468292] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:02.569 [2024-11-20 15:34:51.468405] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:25:02.569 [2024-11-20 15:34:51.468411] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:02.569 [2024-11-20 15:34:51.468421] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.569 [2024-11-20 15:34:51.468425] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.569 [2024-11-20 15:34:51.468429] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1443690) 00:25:02.569 [2024-11-20 15:34:51.468436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.569 [2024-11-20 15:34:51.468447] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14a5100, cid 0, qid 0 00:25:02.569 [2024-11-20 15:34:51.468639] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.569 [2024-11-20 15:34:51.468646] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.569 [2024-11-20 15:34:51.468649] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.569 [2024-11-20 15:34:51.468653] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14a5100) on tqpair=0x1443690 00:25:02.569 [2024-11-20 15:34:51.468658] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:02.569 [2024-11-20 15:34:51.468671] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.569 [2024-11-20 15:34:51.468675] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.569 [2024-11-20 15:34:51.468678] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1443690) 00:25:02.569 [2024-11-20 15:34:51.468685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.569 [2024-11-20 15:34:51.468696] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14a5100, cid 0, qid 0 00:25:02.569 [2024-11-20 15:34:51.468878] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.569 [2024-11-20 15:34:51.468884] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.569 [2024-11-20 15:34:51.468887] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.569 [2024-11-20 15:34:51.468891] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14a5100) on tqpair=0x1443690 00:25:02.569 [2024-11-20 15:34:51.468896] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:02.569 [2024-11-20 15:34:51.468901] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:25:02.569 [2024-11-20 15:34:51.468909] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:25:02.569 [2024-11-20 15:34:51.468918] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:25:02.569 [2024-11-20 15:34:51.468928] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.569 [2024-11-20 15:34:51.468932] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1443690) 00:25:02.569 [2024-11-20 15:34:51.468939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.569 [2024-11-20 15:34:51.468950] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14a5100, cid 0, qid 0 00:25:02.569 [2024-11-20 15:34:51.469186] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:02.569 [2024-11-20 15:34:51.469194] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:02.569 [2024-11-20 15:34:51.469198] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:02.569 [2024-11-20 15:34:51.469203] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1443690): datao=0, datal=4096, cccid=0 00:25:02.569 [2024-11-20 15:34:51.469208] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x14a5100) on tqpair(0x1443690): expected_datao=0, payload_size=4096 00:25:02.569 [2024-11-20 15:34:51.469213] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.569 [2024-11-20 15:34:51.469228] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:02.569 [2024-11-20 15:34:51.469234] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:02.569 [2024-11-20 15:34:51.511341] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.569 [2024-11-20 15:34:51.511353] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.569 [2024-11-20 15:34:51.511357] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.569 [2024-11-20 15:34:51.511361] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14a5100) on tqpair=0x1443690 00:25:02.569 [2024-11-20 15:34:51.511372] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:25:02.569 [2024-11-20 15:34:51.511377] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:25:02.569 [2024-11-20 15:34:51.511382] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:25:02.569 [2024-11-20 15:34:51.511393] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:25:02.569 [2024-11-20 15:34:51.511402] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:25:02.569 [2024-11-20 15:34:51.511407] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:25:02.569 [2024-11-20 15:34:51.511419] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:25:02.569 [2024-11-20 15:34:51.511427] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.569 [2024-11-20 15:34:51.511431] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.569 [2024-11-20 15:34:51.511435] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1443690) 00:25:02.569 [2024-11-20 15:34:51.511443] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:02.569 [2024-11-20 15:34:51.511457] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14a5100, cid 0, qid 0 00:25:02.569 [2024-11-20 15:34:51.511677] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.569 [2024-11-20 15:34:51.511684] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.569 [2024-11-20 15:34:51.511687] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.569 [2024-11-20 15:34:51.511691] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14a5100) on tqpair=0x1443690 00:25:02.569 [2024-11-20 15:34:51.511700] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.569 [2024-11-20 15:34:51.511704] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.569 [2024-11-20 15:34:51.511708] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1443690) 00:25:02.569 
[2024-11-20 15:34:51.511714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.570 [2024-11-20 15:34:51.511721] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.570 [2024-11-20 15:34:51.511724] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.570 [2024-11-20 15:34:51.511728] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1443690) 00:25:02.570 [2024-11-20 15:34:51.511734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.570 [2024-11-20 15:34:51.511740] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.570 [2024-11-20 15:34:51.511744] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.570 [2024-11-20 15:34:51.511747] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1443690) 00:25:02.570 [2024-11-20 15:34:51.511753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.570 [2024-11-20 15:34:51.511759] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.570 [2024-11-20 15:34:51.511763] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.570 [2024-11-20 15:34:51.511767] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1443690) 00:25:02.570 [2024-11-20 15:34:51.511773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.570 [2024-11-20 15:34:51.511777] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:02.570 [2024-11-20 15:34:51.511786] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:02.570 [2024-11-20 15:34:51.511793] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.570 [2024-11-20 15:34:51.511796] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1443690) 00:25:02.570 [2024-11-20 15:34:51.511806] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.570 [2024-11-20 15:34:51.511818] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14a5100, cid 0, qid 0 00:25:02.570 [2024-11-20 15:34:51.511823] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14a5280, cid 1, qid 0 00:25:02.570 [2024-11-20 15:34:51.511828] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14a5400, cid 2, qid 0 00:25:02.570 [2024-11-20 15:34:51.511833] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14a5580, cid 3, qid 0 00:25:02.570 [2024-11-20 15:34:51.511838] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14a5700, cid 4, qid 0 00:25:02.570 [2024-11-20 15:34:51.512079] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.570 [2024-11-20 15:34:51.512086] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.570 [2024-11-20 15:34:51.512089] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:25:02.570 [2024-11-20 15:34:51.512093] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14a5700) on tqpair=0x1443690 00:25:02.570 [2024-11-20 15:34:51.512102] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:25:02.570 [2024-11-20 15:34:51.512107] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:25:02.570 [2024-11-20 15:34:51.512119] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.570 [2024-11-20 15:34:51.512123] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1443690) 00:25:02.570 [2024-11-20 15:34:51.512130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.570 [2024-11-20 15:34:51.512140] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14a5700, cid 4, qid 0 00:25:02.570 [2024-11-20 15:34:51.512355] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:02.570 [2024-11-20 15:34:51.512362] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:02.570 [2024-11-20 15:34:51.512365] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:02.570 [2024-11-20 15:34:51.512369] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1443690): datao=0, datal=4096, cccid=4 00:25:02.570 [2024-11-20 15:34:51.512374] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14a5700) on tqpair(0x1443690): expected_datao=0, payload_size=4096 00:25:02.570 [2024-11-20 15:34:51.512378] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.570 [2024-11-20 15:34:51.512385] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:02.570 [2024-11-20 15:34:51.512389] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:02.570 [2024-11-20 15:34:51.512573] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.570 [2024-11-20 15:34:51.512580] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.570 [2024-11-20 15:34:51.512584] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.570 [2024-11-20 15:34:51.512588] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14a5700) on tqpair=0x1443690 00:25:02.570 [2024-11-20 15:34:51.512602] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:25:02.570 [2024-11-20 15:34:51.512630] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.570 [2024-11-20 15:34:51.512634] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1443690) 00:25:02.570 [2024-11-20 15:34:51.512641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.570 [2024-11-20 15:34:51.512649] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.570 [2024-11-20 15:34:51.512653] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.570 [2024-11-20 15:34:51.512656] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1443690) 00:25:02.570 [2024-11-20 15:34:51.512665] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.570 [2024-11-20 15:34:51.512680] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14a5700, cid 4, qid 0 00:25:02.570 [2024-11-20 15:34:51.512686] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14a5880, cid 5, qid 0 00:25:02.570 [2024-11-20 15:34:51.512932] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:02.570 [2024-11-20 15:34:51.512939] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:02.570 [2024-11-20 15:34:51.512943] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:02.570 [2024-11-20 15:34:51.512946] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1443690): datao=0, datal=1024, cccid=4 00:25:02.570 [2024-11-20 15:34:51.512951] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14a5700) on tqpair(0x1443690): expected_datao=0, payload_size=1024 00:25:02.570 [2024-11-20 15:34:51.512955] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.570 [2024-11-20 15:34:51.512962] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:02.570 [2024-11-20 15:34:51.512966] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:02.570 [2024-11-20 15:34:51.512971] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.570 [2024-11-20 15:34:51.512977] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.570 [2024-11-20 15:34:51.512981] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.570 [2024-11-20 15:34:51.512985] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14a5880) on tqpair=0x1443690 00:25:02.834 [2024-11-20 15:34:51.555171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.834 [2024-11-20 15:34:51.555189] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.834 [2024-11-20 15:34:51.555194] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.834 [2024-11-20 15:34:51.555198] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14a5700) on tqpair=0x1443690 00:25:02.834 [2024-11-20 15:34:51.555214] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.834 [2024-11-20 15:34:51.555218] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1443690) 00:25:02.834 [2024-11-20 15:34:51.555226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.834 [2024-11-20 15:34:51.555246] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14a5700, cid 4, qid 0 00:25:02.834 [2024-11-20 15:34:51.555535] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:02.834 [2024-11-20 15:34:51.555543] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:02.834 [2024-11-20 15:34:51.555546] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:02.834 [2024-11-20 15:34:51.555551] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1443690): datao=0, datal=3072, cccid=4 00:25:02.834 [2024-11-20 15:34:51.555555] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14a5700) on tqpair(0x1443690): expected_datao=0, payload_size=3072 00:25:02.834 [2024-11-20 15:34:51.555560] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.834 [2024-11-20 15:34:51.555567] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:02.834 [2024-11-20 15:34:51.555571] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:02.834 [2024-11-20 15:34:51.555694] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.834 [2024-11-20 15:34:51.555700] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.834 [2024-11-20 15:34:51.555704] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.834 [2024-11-20 15:34:51.555708] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14a5700) on tqpair=0x1443690 00:25:02.834 [2024-11-20 15:34:51.555717] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.834 [2024-11-20 15:34:51.555726] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1443690) 00:25:02.834 [2024-11-20 15:34:51.555733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.834 [2024-11-20 15:34:51.555747] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14a5700, cid 4, qid 0 00:25:02.834 [2024-11-20 15:34:51.556001] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:02.834 [2024-11-20 15:34:51.556008] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:02.834 [2024-11-20 15:34:51.556012] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:02.834 [2024-11-20 15:34:51.556016] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1443690): datao=0, datal=8, cccid=4 00:25:02.834 [2024-11-20 15:34:51.556020] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14a5700) on tqpair(0x1443690): expected_datao=0, payload_size=8 00:25:02.834 [2024-11-20 15:34:51.556024] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.834 [2024-11-20 15:34:51.556031] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:02.834 [2024-11-20 15:34:51.556034] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:02.834 [2024-11-20 15:34:51.596312] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.834 [2024-11-20 15:34:51.596324] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.834 [2024-11-20 15:34:51.596328] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.834 [2024-11-20 15:34:51.596332] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14a5700) on tqpair=0x1443690 00:25:02.834 ===================================================== 00:25:02.834 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:02.834 ===================================================== 00:25:02.834 Controller Capabilities/Features 00:25:02.834 ================================ 00:25:02.834 Vendor ID: 0000 00:25:02.834 Subsystem Vendor ID: 0000 00:25:02.834 Serial Number: .................... 00:25:02.835 Model Number: ........................................ 
00:25:02.835 Firmware Version: 25.01 00:25:02.835 Recommended Arb Burst: 0 00:25:02.835 IEEE OUI Identifier: 00 00 00 00:25:02.835 Multi-path I/O 00:25:02.835 May have multiple subsystem ports: No 00:25:02.835 May have multiple controllers: No 00:25:02.835 Associated with SR-IOV VF: No 00:25:02.835 Max Data Transfer Size: 131072 00:25:02.835 Max Number of Namespaces: 0 00:25:02.835 Max Number of I/O Queues: 1024 00:25:02.835 NVMe Specification Version (VS): 1.3 00:25:02.835 NVMe Specification Version (Identify): 1.3 00:25:02.835 Maximum Queue Entries: 128 00:25:02.835 Contiguous Queues Required: Yes 00:25:02.835 Arbitration Mechanisms Supported 00:25:02.835 Weighted Round Robin: Not Supported 00:25:02.835 Vendor Specific: Not Supported 00:25:02.835 Reset Timeout: 15000 ms 00:25:02.835 Doorbell Stride: 4 bytes 00:25:02.835 NVM Subsystem Reset: Not Supported 00:25:02.835 Command Sets Supported 00:25:02.835 NVM Command Set: Supported 00:25:02.835 Boot Partition: Not Supported 00:25:02.835 Memory Page Size Minimum: 4096 bytes 00:25:02.835 Memory Page Size Maximum: 4096 bytes 00:25:02.835 Persistent Memory Region: Not Supported 00:25:02.835 Optional Asynchronous Events Supported 00:25:02.835 Namespace Attribute Notices: Not Supported 00:25:02.835 Firmware Activation Notices: Not Supported 00:25:02.835 ANA Change Notices: Not Supported 00:25:02.835 PLE Aggregate Log Change Notices: Not Supported 00:25:02.835 LBA Status Info Alert Notices: Not Supported 00:25:02.835 EGE Aggregate Log Change Notices: Not Supported 00:25:02.835 Normal NVM Subsystem Shutdown event: Not Supported 00:25:02.835 Zone Descriptor Change Notices: Not Supported 00:25:02.835 Discovery Log Change Notices: Supported 00:25:02.835 Controller Attributes 00:25:02.835 128-bit Host Identifier: Not Supported 00:25:02.835 Non-Operational Permissive Mode: Not Supported 00:25:02.835 NVM Sets: Not Supported 00:25:02.835 Read Recovery Levels: Not Supported 00:25:02.835 Endurance Groups: Not Supported 00:25:02.835 Predictable Latency Mode: Not Supported 00:25:02.835 Traffic Based Keep Alive: Not Supported 00:25:02.835 Namespace Granularity: Not Supported 00:25:02.835 SQ Associations: Not Supported 00:25:02.835 UUID List: Not Supported 00:25:02.835 Multi-Domain Subsystem: Not Supported 00:25:02.835 Fixed Capacity Management: Not Supported 00:25:02.835 Variable Capacity Management: Not Supported 00:25:02.835 Delete Endurance Group: Not Supported 00:25:02.835 Delete NVM Set: Not Supported 00:25:02.835 Extended LBA Formats Supported: Not Supported 00:25:02.835 Flexible Data Placement Supported: Not Supported 00:25:02.835 00:25:02.835 Controller Memory Buffer Support 00:25:02.835 ================================ 00:25:02.835 Supported: No 00:25:02.835 00:25:02.835 Persistent Memory Region Support 00:25:02.835 ================================ 00:25:02.835 Supported: No 00:25:02.835 00:25:02.835 Admin Command Set Attributes 00:25:02.835 ============================ 00:25:02.835 Security Send/Receive: Not Supported 00:25:02.835 Format NVM: Not Supported 00:25:02.835 Firmware Activate/Download: Not Supported 00:25:02.835 Namespace Management: Not Supported 00:25:02.835 Device Self-Test: Not Supported 00:25:02.835 Directives: Not Supported 00:25:02.835 NVMe-MI: Not Supported 00:25:02.835 Virtualization Management: Not Supported 00:25:02.835 Doorbell Buffer Config: Not Supported 00:25:02.835 Get LBA Status Capability: Not Supported 00:25:02.835 Command & Feature Lockdown Capability: Not Supported 00:25:02.835 Abort Command Limit: 1 00:25:02.835 Async
Event Request Limit: 4 00:25:02.835 Number of Firmware Slots: N/A 00:25:02.835 Firmware Slot 1 Read-Only: N/A 00:25:02.835 Firmware Activation Without Reset: N/A 00:25:02.835 Multiple Update Detection Support: N/A 00:25:02.835 Firmware Update Granularity: No Information Provided 00:25:02.835 Per-Namespace SMART Log: No 00:25:02.835 Asymmetric Namespace Access Log Page: Not Supported 00:25:02.835 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:02.835 Command Effects Log Page: Not Supported 00:25:02.835 Get Log Page Extended Data: Supported 00:25:02.835 Telemetry Log Pages: Not Supported 00:25:02.835 Persistent Event Log Pages: Not Supported 00:25:02.835 Supported Log Pages Log Page: May Support 00:25:02.835 Commands Supported & Effects Log Page: Not Supported 00:25:02.835 Feature Identifiers & Effects Log Page: May Support 00:25:02.835 NVMe-MI Commands & Effects Log Page: May Support 00:25:02.835 Data Area 4 for Telemetry Log: Not Supported 00:25:02.835 Error Log Page Entries Supported: 128 00:25:02.835 Keep Alive: Not Supported 00:25:02.835 00:25:02.835 NVM Command Set Attributes 00:25:02.835 ========================== 00:25:02.835 Submission Queue Entry Size 00:25:02.835 Max: 1 00:25:02.835 Min: 1 00:25:02.835 Completion Queue Entry Size 00:25:02.835 Max: 1 00:25:02.835 Min: 1 00:25:02.835 Number of Namespaces: 0 00:25:02.835 Compare Command: Not Supported 00:25:02.835 Write Uncorrectable Command: Not Supported 00:25:02.835 Dataset Management Command: Not Supported 00:25:02.835 Write Zeroes Command: Not Supported 00:25:02.835 Set Features Save Field: Not Supported 00:25:02.835 Reservations: Not Supported 00:25:02.835 Timestamp: Not Supported 00:25:02.835 Copy: Not Supported 00:25:02.835 Volatile Write Cache: Not Present 00:25:02.835 Atomic Write Unit (Normal): 1 00:25:02.835 Atomic Write Unit (PFail): 1 00:25:02.835 Atomic Compare & Write Unit: 1 00:25:02.835 Fused Compare & Write: Supported 00:25:02.835 Scatter-Gather List 00:25:02.835 SGL Command Set: Supported 00:25:02.835 SGL Keyed: Supported 00:25:02.835 SGL Bit Bucket Descriptor: Not Supported 00:25:02.835 SGL Metadata Pointer: Not Supported 00:25:02.835 Oversized SGL: Not Supported 00:25:02.835 SGL Metadata Address: Not Supported 00:25:02.835 SGL Offset: Supported 00:25:02.835 Transport SGL Data Block: Not Supported 00:25:02.835 Replay Protected Memory Block: Not Supported 00:25:02.835 00:25:02.835 Firmware Slot Information 00:25:02.835 ========================= 00:25:02.835 Active slot: 0 00:25:02.835 00:25:02.835 00:25:02.835 Error Log 00:25:02.835 ========= 00:25:02.835 00:25:02.835 Active Namespaces 00:25:02.835 ================= 00:25:02.835 Discovery Log Page 00:25:02.835 ================== 00:25:02.835 Generation Counter: 2 00:25:02.835 Number of Records: 2 00:25:02.835 Record Format: 0 00:25:02.835 00:25:02.835 Discovery Log Entry 0 00:25:02.835 ---------------------- 00:25:02.835 Transport Type: 3 (TCP) 00:25:02.835 Address Family: 1 (IPv4) 00:25:02.835 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:02.835 Entry Flags: 00:25:02.835 Duplicate Returned Information: 1 00:25:02.835 Explicit Persistent Connection Support for Discovery: 1 00:25:02.835 Transport Requirements: 00:25:02.835 Secure Channel: Not Required 00:25:02.835 Port ID: 0 (0x0000) 00:25:02.835 Controller ID: 65535 (0xffff) 00:25:02.835 Admin Max SQ Size: 128 00:25:02.835 Transport Service Identifier: 4420 00:25:02.835 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:02.835 Transport Address: 10.0.0.2 00:25:02.835
Discovery Log Entry 1 00:25:02.835 ---------------------- 00:25:02.835 Transport Type: 3 (TCP) 00:25:02.835 Address Family: 1 (IPv4) 00:25:02.835 Subsystem Type: 2 (NVM Subsystem) 00:25:02.835 Entry Flags: 00:25:02.835 Duplicate Returned Information: 0 00:25:02.835 Explicit Persistent Connection Support for Discovery: 0 00:25:02.835 Transport Requirements: 00:25:02.835 Secure Channel: Not Required 00:25:02.835 Port ID: 0 (0x0000) 00:25:02.835 Controller ID: 65535 (0xffff) 00:25:02.835 Admin Max SQ Size: 128 00:25:02.835 Transport Service Identifier: 4420 00:25:02.835 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:25:02.835 Transport Address: 10.0.0.2 [2024-11-20 15:34:51.596440] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:25:02.835 [2024-11-20 15:34:51.596451] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14a5100) on tqpair=0x1443690 00:25:02.835 [2024-11-20 15:34:51.596460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.835 [2024-11-20 15:34:51.596467] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14a5280) on tqpair=0x1443690 00:25:02.835 [2024-11-20 15:34:51.596473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.835 [2024-11-20 15:34:51.596482] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14a5400) on tqpair=0x1443690 00:25:02.836 [2024-11-20 15:34:51.596487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.836 [2024-11-20 15:34:51.596495] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14a5580) on tqpair=0x1443690 00:25:02.836 [2024-11-20 15:34:51.596501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.836 [2024-11-20 15:34:51.596515] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.836 [2024-11-20 15:34:51.596519] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.836 [2024-11-20 15:34:51.596523] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1443690) 00:25:02.836 [2024-11-20 15:34:51.596531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.836 [2024-11-20 15:34:51.596547] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14a5580, cid 3, qid 0 00:25:02.836 [2024-11-20 15:34:51.596785] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.836 [2024-11-20 15:34:51.596792] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.836 [2024-11-20 15:34:51.596797] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.836 [2024-11-20 15:34:51.596801] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14a5580) on tqpair=0x1443690 00:25:02.836 [2024-11-20 15:34:51.596812] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.836 [2024-11-20 15:34:51.596820] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.836 [2024-11-20 15:34:51.596827] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1443690) 00:25:02.836 [2024-11-20 
15:34:51.596836] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.836 [2024-11-20 15:34:51.596852] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14a5580, cid 3, qid 0 00:25:02.836 [2024-11-20 15:34:51.597098] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.836 [2024-11-20 15:34:51.597105] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.836 [2024-11-20 15:34:51.597108] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.836 [2024-11-20 15:34:51.597112] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14a5580) on tqpair=0x1443690 00:25:02.836 [2024-11-20 15:34:51.597119] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:25:02.836 [2024-11-20 15:34:51.597124] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:25:02.836 [2024-11-20 15:34:51.597134] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.836 [2024-11-20 15:34:51.597138] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.836 [2024-11-20 15:34:51.597142] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1443690) 00:25:02.836 [2024-11-20 15:34:51.597149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.836 [2024-11-20 15:34:51.597183] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14a5580, cid 3, qid 0 00:25:02.836 [2024-11-20 15:34:51.597338] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.836 [2024-11-20 15:34:51.597344] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.836 [2024-11-20 15:34:51.597347] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.836 [2024-11-20 15:34:51.597351] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14a5580) on tqpair=0x1443690 00:25:02.836 [2024-11-20 15:34:51.597363] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.836 [2024-11-20 15:34:51.597366] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.836 [2024-11-20 15:34:51.597370] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1443690) 00:25:02.836 [2024-11-20 15:34:51.597377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.836 [2024-11-20 15:34:51.597388] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14a5580, cid 3, qid 0 00:25:02.836 [2024-11-20 15:34:51.597595] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.836 [2024-11-20 15:34:51.597601] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.836 [2024-11-20 15:34:51.597604] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.836 [2024-11-20 15:34:51.597608] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14a5580) on tqpair=0x1443690 00:25:02.836 [2024-11-20 15:34:51.597619] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.836 [2024-11-20 15:34:51.597623] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.836 [2024-11-20 15:34:51.597626] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1443690) 00:25:02.836 [2024-11-20 15:34:51.597633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.836 [2024-11-20 15:34:51.597643] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14a5580, cid 3, qid 0 00:25:02.836 [2024-11-20 15:34:51.597843] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.836 [2024-11-20 15:34:51.597850] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.836 [2024-11-20 15:34:51.597853] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.836 [2024-11-20 15:34:51.597857] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14a5580) on tqpair=0x1443690 00:25:02.836 [2024-11-20 15:34:51.597870] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.836 [2024-11-20 15:34:51.597874] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.836 [2024-11-20 15:34:51.597877] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1443690) 00:25:02.836 [2024-11-20 15:34:51.597884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.836 [2024-11-20 15:34:51.597894] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14a5580, cid 3, qid 0 00:25:02.836 [2024-11-20 15:34:51.598095] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.836 [2024-11-20 15:34:51.598102] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.836 [2024-11-20 15:34:51.598105] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.836 [2024-11-20 15:34:51.598109] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14a5580) on tqpair=0x1443690 00:25:02.836 [2024-11-20 15:34:51.598119] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.836 [2024-11-20 15:34:51.598123] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.836 [2024-11-20 15:34:51.598126] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1443690) 00:25:02.836 [2024-11-20 15:34:51.598133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.836 [2024-11-20 15:34:51.598145] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14a5580, cid 3, qid 0 00:25:02.836 [2024-11-20 15:34:51.602169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.836 [2024-11-20 15:34:51.602193] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.836 [2024-11-20 15:34:51.602197] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.836 [2024-11-20 15:34:51.602201] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14a5580) on tqpair=0x1443690 00:25:02.836 [2024-11-20 15:34:51.602210] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:25:02.836 00:25:02.836 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 
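For readers following the trace, the spdk_nvme_identify invocation above reduces to a handful of public SPDK calls: parse the transport ID string passed via -r, connect, then read the cached identify data. A minimal C sketch, assuming the SPDK development headers and a target reachable at 10.0.0.2:4420; the app name and error handling are illustrative, and the real tool uses the asynchronous probe/attach path and prints many more fields:

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid = {0};
    struct spdk_nvme_ctrlr *ctrlr;
    const struct spdk_nvme_ctrlr_data *cdata;

    /* Bring up the SPDK environment (DPDK EAL underneath, as in the log). */
    spdk_env_opts_init(&env_opts);
    env_opts.name = "identify_sketch";   /* hypothetical app name */
    if (spdk_env_init(&env_opts) < 0) {
        return 1;
    }

    /* The same transport ID string the tool received via -r. */
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return 1;
    }

    /* Synchronous connect: this drives the whole init state machine traced
     * above (FABRIC CONNECT, PROPERTY GET/SET, CC.EN = 1, wait for
     * CSTS.RDY = 1, IDENTIFY, keep alive setup). */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        return 1;
    }

    /* The identify controller data dumped above is cached after init. */
    cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    printf("serial: %.20s model: %.40s fw: %.8s\n",
           (const char *)cdata->sn, (const char *)cdata->mn,
           (const char *)cdata->fr);

    spdk_nvme_detach(ctrlr);
    return 0;
}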
00:25:02.836 [2024-11-20 15:34:51.649484] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:25:02.836 [2024-11-20 15:34:51.649529] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid697682 ] 00:25:02.836 [2024-11-20 15:34:51.705683] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:25:02.836 [2024-11-20 15:34:51.705744] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:02.836 [2024-11-20 15:34:51.705750] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:02.836 [2024-11-20 15:34:51.705768] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:02.836 [2024-11-20 15:34:51.705779] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:02.836 [2024-11-20 15:34:51.709468] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:25:02.836 [2024-11-20 15:34:51.709502] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x141a690 0 00:25:02.836 [2024-11-20 15:34:51.717175] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:02.836 [2024-11-20 15:34:51.717195] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:02.836 [2024-11-20 15:34:51.717200] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:02.836 [2024-11-20 15:34:51.717203] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:02.836 [2024-11-20 15:34:51.717239] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.836 [2024-11-20 15:34:51.717245] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.836 [2024-11-20 15:34:51.717249] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x141a690) 00:25:02.836 [2024-11-20 15:34:51.717262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:02.836 [2024-11-20 15:34:51.717286] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x147c100, cid 0, qid 0 00:25:02.836 [2024-11-20 15:34:51.725171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.836 [2024-11-20 15:34:51.725180] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.836 [2024-11-20 15:34:51.725184] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.836 [2024-11-20 15:34:51.725189] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x147c100) on tqpair=0x141a690 00:25:02.836 [2024-11-20 15:34:51.725199] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:02.836 [2024-11-20 15:34:51.725206] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:25:02.836 [2024-11-20 15:34:51.725212] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:25:02.836 [2024-11-20 15:34:51.725227] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.836 [2024-11-20 15:34:51.725231] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.836 [2024-11-20 15:34:51.725235] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x141a690) 00:25:02.836 [2024-11-20 15:34:51.725243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.836 [2024-11-20 15:34:51.725259] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x147c100, cid 0, qid 0 00:25:02.836 [2024-11-20 15:34:51.725442] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.837 [2024-11-20 15:34:51.725449] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.837 [2024-11-20 15:34:51.725453] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.837 [2024-11-20 15:34:51.725457] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x147c100) on tqpair=0x141a690 00:25:02.837 [2024-11-20 15:34:51.725462] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:25:02.837 [2024-11-20 15:34:51.725470] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:25:02.837 [2024-11-20 15:34:51.725478] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.837 [2024-11-20 15:34:51.725482] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.837 [2024-11-20 15:34:51.725486] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x141a690) 00:25:02.837 [2024-11-20 15:34:51.725493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.837 [2024-11-20 15:34:51.725503] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x147c100, cid 0, qid 0 00:25:02.837 [2024-11-20 15:34:51.725726] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.837 [2024-11-20 15:34:51.725733] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.837 [2024-11-20 15:34:51.725738] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.837 [2024-11-20 15:34:51.725742] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x147c100) on tqpair=0x141a690 00:25:02.837 [2024-11-20 15:34:51.725747] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:25:02.837 [2024-11-20 15:34:51.725760] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:25:02.837 [2024-11-20 15:34:51.725768] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.837 [2024-11-20 15:34:51.725772] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.837 [2024-11-20 15:34:51.725776] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x141a690) 00:25:02.837 [2024-11-20 15:34:51.725783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.837 [2024-11-20 15:34:51.725793] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x147c100, cid 0, qid 0 00:25:02.837 [2024-11-20 15:34:51.725998] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.837 [2024-11-20 
15:34:51.726005] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.837 [2024-11-20 15:34:51.726009] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.837 [2024-11-20 15:34:51.726013] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x147c100) on tqpair=0x141a690 00:25:02.837 [2024-11-20 15:34:51.726018] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:02.837 [2024-11-20 15:34:51.726028] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.837 [2024-11-20 15:34:51.726032] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.837 [2024-11-20 15:34:51.726037] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x141a690) 00:25:02.837 [2024-11-20 15:34:51.726044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.837 [2024-11-20 15:34:51.726055] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x147c100, cid 0, qid 0 00:25:02.837 [2024-11-20 15:34:51.726238] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.837 [2024-11-20 15:34:51.726245] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.837 [2024-11-20 15:34:51.726249] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.837 [2024-11-20 15:34:51.726253] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x147c100) on tqpair=0x141a690 00:25:02.837 [2024-11-20 15:34:51.726257] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:25:02.837 [2024-11-20 15:34:51.726262] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:25:02.837 [2024-11-20 15:34:51.726270] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:02.837 [2024-11-20 15:34:51.726380] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:25:02.837 [2024-11-20 15:34:51.726385] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:02.837 [2024-11-20 15:34:51.726393] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.837 [2024-11-20 15:34:51.726397] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.837 [2024-11-20 15:34:51.726400] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x141a690) 00:25:02.837 [2024-11-20 15:34:51.726407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.837 [2024-11-20 15:34:51.726418] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x147c100, cid 0, qid 0 00:25:02.837 [2024-11-20 15:34:51.726612] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.837 [2024-11-20 15:34:51.726618] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.837 [2024-11-20 15:34:51.726621] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.837 [2024-11-20 
15:34:51.726627] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x147c100) on tqpair=0x141a690 00:25:02.837 [2024-11-20 15:34:51.726632] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:02.837 [2024-11-20 15:34:51.726643] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.837 [2024-11-20 15:34:51.726647] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.837 [2024-11-20 15:34:51.726651] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x141a690) 00:25:02.837 [2024-11-20 15:34:51.726658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.837 [2024-11-20 15:34:51.726668] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x147c100, cid 0, qid 0 00:25:02.837 [2024-11-20 15:34:51.726882] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.837 [2024-11-20 15:34:51.726888] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.837 [2024-11-20 15:34:51.726891] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.837 [2024-11-20 15:34:51.726895] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x147c100) on tqpair=0x141a690 00:25:02.837 [2024-11-20 15:34:51.726900] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:02.837 [2024-11-20 15:34:51.726905] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:25:02.837 [2024-11-20 15:34:51.726913] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:25:02.837 [2024-11-20 15:34:51.726921] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:25:02.837 [2024-11-20 15:34:51.726930] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.837 [2024-11-20 15:34:51.726934] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x141a690) 00:25:02.837 [2024-11-20 15:34:51.726941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.837 [2024-11-20 15:34:51.726952] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x147c100, cid 0, qid 0 00:25:02.837 [2024-11-20 15:34:51.727201] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:02.837 [2024-11-20 15:34:51.727208] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:02.837 [2024-11-20 15:34:51.727212] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:02.837 [2024-11-20 15:34:51.727217] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x141a690): datao=0, datal=4096, cccid=0 00:25:02.837 [2024-11-20 15:34:51.727221] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x147c100) on tqpair(0x141a690): expected_datao=0, payload_size=4096 00:25:02.837 [2024-11-20 15:34:51.727226] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.837 [2024-11-20 15:34:51.727234] 
nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:02.837 [2024-11-20 15:34:51.727238] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:02.837 [2024-11-20 15:34:51.727398] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.837 [2024-11-20 15:34:51.727405] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.837 [2024-11-20 15:34:51.727408] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.837 [2024-11-20 15:34:51.727412] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x147c100) on tqpair=0x141a690 00:25:02.837 [2024-11-20 15:34:51.727420] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:25:02.837 [2024-11-20 15:34:51.727425] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:25:02.837 [2024-11-20 15:34:51.727435] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:25:02.837 [2024-11-20 15:34:51.727443] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:25:02.837 [2024-11-20 15:34:51.727447] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:25:02.837 [2024-11-20 15:34:51.727452] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:25:02.837 [2024-11-20 15:34:51.727464] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:25:02.837 [2024-11-20 15:34:51.727470] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.837 [2024-11-20 15:34:51.727474] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.837 [2024-11-20 15:34:51.727478] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x141a690) 00:25:02.837 [2024-11-20 15:34:51.727486] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:02.837 [2024-11-20 15:34:51.727497] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x147c100, cid 0, qid 0 00:25:02.837 [2024-11-20 15:34:51.727674] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.837 [2024-11-20 15:34:51.727681] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.837 [2024-11-20 15:34:51.727684] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.837 [2024-11-20 15:34:51.727688] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x147c100) on tqpair=0x141a690 00:25:02.837 [2024-11-20 15:34:51.727695] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.837 [2024-11-20 15:34:51.727699] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.837 [2024-11-20 15:34:51.727703] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x141a690) 00:25:02.837 [2024-11-20 15:34:51.727709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.838 [2024-11-20 15:34:51.727715] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.838 [2024-11-20 15:34:51.727719] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.838 [2024-11-20 15:34:51.727723] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x141a690) 00:25:02.838 [2024-11-20 15:34:51.727729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.838 [2024-11-20 15:34:51.727735] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.838 [2024-11-20 15:34:51.727739] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.838 [2024-11-20 15:34:51.727742] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x141a690) 00:25:02.838 [2024-11-20 15:34:51.727748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.838 [2024-11-20 15:34:51.727754] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.838 [2024-11-20 15:34:51.727758] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.838 [2024-11-20 15:34:51.727762] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x141a690) 00:25:02.838 [2024-11-20 15:34:51.727767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.838 [2024-11-20 15:34:51.727772] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:02.838 [2024-11-20 15:34:51.727781] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:02.838 [2024-11-20 15:34:51.727787] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.838 [2024-11-20 15:34:51.727795] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x141a690) 00:25:02.838 [2024-11-20 15:34:51.727802] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.838 [2024-11-20 15:34:51.727814] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x147c100, cid 0, qid 0 00:25:02.838 [2024-11-20 15:34:51.727819] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x147c280, cid 1, qid 0 00:25:02.838 [2024-11-20 15:34:51.727824] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x147c400, cid 2, qid 0 00:25:02.838 [2024-11-20 15:34:51.727829] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x147c580, cid 3, qid 0 00:25:02.838 [2024-11-20 15:34:51.727834] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x147c700, cid 4, qid 0 00:25:02.838 [2024-11-20 15:34:51.728082] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.838 [2024-11-20 15:34:51.728089] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.838 [2024-11-20 15:34:51.728092] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.838 [2024-11-20 15:34:51.728096] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x147c700) on tqpair=0x141a690 00:25:02.838 [2024-11-20 15:34:51.728103] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 
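The keep alive cadence in these records (one keep alive every 5000000 us, i.e. half of the granted 10 s timeout) is negotiated at connect time. A minimal C sketch of how a host application might choose a different timeout, assuming the same public API; connect_with_kato, poll_admin, and the 20000 ms value are illustrative and not part of the test above:

#include "spdk/nvme.h"

/* Hypothetical helper: connect with a non-default keep alive timeout.
 * On fabrics the requested timeout travels inside the CONNECT command; the
 * host then reads back the granted value with GET FEATURES KEEP ALIVE TIMER
 * (feature 0x0f, the cdw10:0000000f command above) and transmits keep
 * alives at half that interval. */
static struct spdk_nvme_ctrlr *
connect_with_kato(const struct spdk_nvme_transport_id *trid)
{
    struct spdk_nvme_ctrlr_opts opts;

    /* Start from library defaults (keep_alive_timeout_ms = 10000). */
    spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
    opts.keep_alive_timeout_ms = 20000;   /* illustrative value */

    return spdk_nvme_connect(trid, &opts, sizeof(opts));
}

/* Keep alives are only transmitted while the admin queue is serviced, so a
 * long-running application must keep calling this from its event loop. */
static void
poll_admin(struct spdk_nvme_ctrlr *ctrlr)
{
    spdk_nvme_ctrlr_process_admin_completions(ctrlr);
}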
00:25:02.838 [2024-11-20 15:34:51.728109] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:02.838 [2024-11-20 15:34:51.728118] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:25:02.838 [2024-11-20 15:34:51.728125] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:02.838 [2024-11-20 15:34:51.728132] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.838 [2024-11-20 15:34:51.728136] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.838 [2024-11-20 15:34:51.728139] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x141a690) 00:25:02.838 [2024-11-20 15:34:51.728146] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:02.838 [2024-11-20 15:34:51.728157] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x147c700, cid 4, qid 0 00:25:02.838 [2024-11-20 15:34:51.728329] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.838 [2024-11-20 15:34:51.728335] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.838 [2024-11-20 15:34:51.728339] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.838 [2024-11-20 15:34:51.728343] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x147c700) on tqpair=0x141a690 00:25:02.838 [2024-11-20 15:34:51.728410] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:25:02.838 [2024-11-20 15:34:51.728420] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:02.838 [2024-11-20 15:34:51.728428] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.838 [2024-11-20 15:34:51.728432] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x141a690) 00:25:02.838 [2024-11-20 15:34:51.728438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.838 [2024-11-20 15:34:51.728450] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x147c700, cid 4, qid 0 00:25:02.838 [2024-11-20 15:34:51.728634] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:02.838 [2024-11-20 15:34:51.728640] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:02.838 [2024-11-20 15:34:51.728646] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:02.838 [2024-11-20 15:34:51.728650] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x141a690): datao=0, datal=4096, cccid=4 00:25:02.838 [2024-11-20 15:34:51.728655] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x147c700) on tqpair(0x141a690): expected_datao=0, payload_size=4096 00:25:02.838 [2024-11-20 15:34:51.728659] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.838 [2024-11-20 15:34:51.728695] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:02.838 [2024-11-20 15:34:51.728699] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:02.838 [2024-11-20 15:34:51.773169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:02.838 [2024-11-20 15:34:51.773180] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:02.838 [2024-11-20 15:34:51.773183] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:02.838 [2024-11-20 15:34:51.773187] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x147c700) on tqpair=0x141a690 00:25:02.838 [2024-11-20 15:34:51.773199] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:25:02.838 [2024-11-20 15:34:51.773217] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:25:02.838 [2024-11-20 15:34:51.773228] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:25:02.838 [2024-11-20 15:34:51.773235] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:02.838 [2024-11-20 15:34:51.773239] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x141a690) 00:25:02.838 [2024-11-20 15:34:51.773246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.838 [2024-11-20 15:34:51.773259] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x147c700, cid 4, qid 0 00:25:02.838 [2024-11-20 15:34:51.773486] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:02.838 [2024-11-20 15:34:51.773493] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:02.838 [2024-11-20 15:34:51.773497] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:02.838 [2024-11-20 15:34:51.773500] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x141a690): datao=0, datal=4096, cccid=4 00:25:02.838 [2024-11-20 15:34:51.773505] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x147c700) on tqpair(0x141a690): expected_datao=0, payload_size=4096 00:25:02.838 [2024-11-20 15:34:51.773509] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:02.838 [2024-11-20 15:34:51.773545] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:02.838 [2024-11-20 15:34:51.773549] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:03.103 [2024-11-20 15:34:51.815353] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:03.103 [2024-11-20 15:34:51.815364] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:03.103 [2024-11-20 15:34:51.815368] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:03.103 [2024-11-20 15:34:51.815372] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x147c700) on tqpair=0x141a690 00:25:03.103 [2024-11-20 15:34:51.815389] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:03.103 [2024-11-20 15:34:51.815399] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:03.103 [2024-11-20 15:34:51.815407] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:03.103 
[2024-11-20 15:34:51.815411] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x141a690) 00:25:03.103 [2024-11-20 15:34:51.815418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.103 [2024-11-20 15:34:51.815434] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x147c700, cid 4, qid 0 00:25:03.103 [2024-11-20 15:34:51.815616] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:03.103 [2024-11-20 15:34:51.815623] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:03.103 [2024-11-20 15:34:51.815626] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:03.103 [2024-11-20 15:34:51.815630] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x141a690): datao=0, datal=4096, cccid=4 00:25:03.103 [2024-11-20 15:34:51.815634] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x147c700) on tqpair(0x141a690): expected_datao=0, payload_size=4096 00:25:03.103 [2024-11-20 15:34:51.815639] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:03.103 [2024-11-20 15:34:51.815666] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:03.103 [2024-11-20 15:34:51.815670] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:03.103 [2024-11-20 15:34:51.861167] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:03.103 [2024-11-20 15:34:51.861175] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:03.103 [2024-11-20 15:34:51.861179] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:03.103 [2024-11-20 15:34:51.861183] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x147c700) on tqpair=0x141a690 00:25:03.103 [2024-11-20 15:34:51.861192] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:03.103 [2024-11-20 15:34:51.861201] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:25:03.103 [2024-11-20 15:34:51.861211] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:25:03.103 [2024-11-20 15:34:51.861218] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:25:03.103 [2024-11-20 15:34:51.861223] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:03.103 [2024-11-20 15:34:51.861229] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:25:03.103 [2024-11-20 15:34:51.861235] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:25:03.103 [2024-11-20 15:34:51.861239] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:25:03.103 [2024-11-20 15:34:51.861245] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:25:03.103 [2024-11-20 15:34:51.861262] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:03.103 [2024-11-20 15:34:51.861266] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x141a690) 00:25:03.103 [2024-11-20 15:34:51.861273] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.103 [2024-11-20 15:34:51.861280] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:03.103 [2024-11-20 15:34:51.861284] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:03.103 [2024-11-20 15:34:51.861288] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x141a690) 00:25:03.103 [2024-11-20 15:34:51.861294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:03.103 [2024-11-20 15:34:51.861310] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x147c700, cid 4, qid 0 00:25:03.103 [2024-11-20 15:34:51.861315] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x147c880, cid 5, qid 0 00:25:03.103 [2024-11-20 15:34:51.861590] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:03.103 [2024-11-20 15:34:51.861600] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:03.103 [2024-11-20 15:34:51.861603] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:03.103 [2024-11-20 15:34:51.861607] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x147c700) on tqpair=0x141a690 00:25:03.103 [2024-11-20 15:34:51.861614] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:03.103 [2024-11-20 15:34:51.861620] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:03.103 [2024-11-20 15:34:51.861624] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:03.103 [2024-11-20 15:34:51.861628] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x147c880) on tqpair=0x141a690 00:25:03.103 [2024-11-20 15:34:51.861637] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:03.103 [2024-11-20 15:34:51.861641] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x141a690) 00:25:03.104 [2024-11-20 15:34:51.861647] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.104 [2024-11-20 15:34:51.861658] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x147c880, cid 5, qid 0 00:25:03.104 [2024-11-20 15:34:51.861839] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:03.104 [2024-11-20 15:34:51.861845] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:03.104 [2024-11-20 15:34:51.861849] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:03.104 [2024-11-20 15:34:51.861853] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x147c880) on tqpair=0x141a690 00:25:03.104 [2024-11-20 15:34:51.861862] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:03.104 [2024-11-20 15:34:51.861866] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x141a690) 00:25:03.104 [2024-11-20 15:34:51.861872] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:03.104 [2024-11-20 15:34:51.861882] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x147c880, cid 5, qid 0 00:25:03.104 [2024-11-20 15:34:51.862086] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:03.104 [2024-11-20 15:34:51.862092] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:03.104 [2024-11-20 15:34:51.862096] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:03.104 [2024-11-20 15:34:51.862100] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x147c880) on tqpair=0x141a690 00:25:03.104 [2024-11-20 15:34:51.862109] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:03.104 [2024-11-20 15:34:51.862113] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x141a690) 00:25:03.104 [2024-11-20 15:34:51.862119] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.104 [2024-11-20 15:34:51.862129] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x147c880, cid 5, qid 0 00:25:03.104 [2024-11-20 15:34:51.862313] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:03.104 [2024-11-20 15:34:51.862320] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:03.104 [2024-11-20 15:34:51.862324] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:03.104 [2024-11-20 15:34:51.862327] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x147c880) on tqpair=0x141a690 00:25:03.104 [2024-11-20 15:34:51.862344] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:03.104 [2024-11-20 15:34:51.862349] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x141a690) 00:25:03.104 [2024-11-20 15:34:51.862356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.104 [2024-11-20 15:34:51.862363] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:03.104 [2024-11-20 15:34:51.862367] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x141a690) 00:25:03.104 [2024-11-20 15:34:51.862375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.104 [2024-11-20 15:34:51.862383] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:03.104 [2024-11-20 15:34:51.862387] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x141a690) 00:25:03.104 [2024-11-20 15:34:51.862393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.104 [2024-11-20 15:34:51.862401] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:03.104 [2024-11-20 15:34:51.862405] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x141a690) 00:25:03.104 [2024-11-20 15:34:51.862411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.104 [2024-11-20 15:34:51.862424] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x147c880, cid 5, qid 0 00:25:03.104 [2024-11-20 15:34:51.862429] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x147c700, cid 4, qid 0 00:25:03.104 [2024-11-20 15:34:51.862433] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x147ca00, cid 6, qid 0 00:25:03.104 [2024-11-20 15:34:51.862438] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x147cb80, cid 7, qid 0 00:25:03.104 [2024-11-20 15:34:51.862735] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:03.104 [2024-11-20 15:34:51.862742] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:03.104 [2024-11-20 15:34:51.862745] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:03.104 [2024-11-20 15:34:51.862749] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x141a690): datao=0, datal=8192, cccid=5 00:25:03.104 [2024-11-20 15:34:51.862754] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x147c880) on tqpair(0x141a690): expected_datao=0, payload_size=8192 00:25:03.104 [2024-11-20 15:34:51.862758] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:03.104 [2024-11-20 15:34:51.862856] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:03.104 [2024-11-20 15:34:51.862860] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:03.104 [2024-11-20 15:34:51.862866] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:03.104 [2024-11-20 15:34:51.862872] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:03.104 [2024-11-20 15:34:51.862875] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:03.104 [2024-11-20 15:34:51.862879] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x141a690): datao=0, datal=512, cccid=4 00:25:03.104 [2024-11-20 15:34:51.862884] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x147c700) on tqpair(0x141a690): expected_datao=0, payload_size=512 00:25:03.104 [2024-11-20 15:34:51.862888] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:03.104 [2024-11-20 15:34:51.862894] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:03.104 [2024-11-20 15:34:51.862898] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:03.104 [2024-11-20 15:34:51.862904] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:03.104 [2024-11-20 15:34:51.862909] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:03.104 [2024-11-20 15:34:51.862913] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:03.104 [2024-11-20 15:34:51.862916] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x141a690): datao=0, datal=512, cccid=6 00:25:03.104 [2024-11-20 15:34:51.862921] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x147ca00) on tqpair(0x141a690): expected_datao=0, payload_size=512 00:25:03.104 [2024-11-20 15:34:51.862925] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:03.104 [2024-11-20 15:34:51.862931] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:03.104 [2024-11-20 15:34:51.862935] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:03.104 [2024-11-20 15:34:51.862946] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:03.104 [2024-11-20 15:34:51.862952] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:03.104 [2024-11-20 15:34:51.862955] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:03.104 [2024-11-20 15:34:51.862959] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x141a690): datao=0, datal=4096, cccid=7 00:25:03.104 [2024-11-20 15:34:51.862963] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x147cb80) on tqpair(0x141a690): expected_datao=0, payload_size=4096 00:25:03.104 [2024-11-20 15:34:51.862967] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:03.104 [2024-11-20 15:34:51.862974] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:03.104 [2024-11-20 15:34:51.862978] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:03.104 [2024-11-20 15:34:51.862986] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:03.104 [2024-11-20 15:34:51.862992] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:03.104 [2024-11-20 15:34:51.862996] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:03.104 [2024-11-20 15:34:51.862999] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x147c880) on tqpair=0x141a690 00:25:03.104 [2024-11-20 15:34:51.863012] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:03.104 [2024-11-20 15:34:51.863018] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:03.105 [2024-11-20 15:34:51.863022] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:03.105 [2024-11-20 15:34:51.863025] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x147c700) on tqpair=0x141a690 00:25:03.105 [2024-11-20 15:34:51.863036] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:03.105 [2024-11-20 15:34:51.863042] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:03.105 [2024-11-20 15:34:51.863045] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:03.105 [2024-11-20 15:34:51.863049] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x147ca00) on tqpair=0x141a690 00:25:03.105 [2024-11-20 15:34:51.863056] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:03.105 [2024-11-20 15:34:51.863062] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:03.105 [2024-11-20 15:34:51.863065] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:03.105 [2024-11-20 15:34:51.863069] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x147cb80) on tqpair=0x141a690 00:25:03.105 ===================================================== 00:25:03.105 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:03.105 ===================================================== 00:25:03.105 Controller Capabilities/Features 00:25:03.105 ================================ 00:25:03.105 Vendor ID: 8086 00:25:03.105 Subsystem Vendor ID: 8086 00:25:03.105 Serial Number: SPDK00000000000001 00:25:03.105 Model Number: SPDK bdev Controller 00:25:03.105 Firmware Version: 25.01 00:25:03.105 Recommended Arb Burst: 6 00:25:03.105 IEEE OUI Identifier: e4 d2 5c 00:25:03.105 Multi-path I/O 00:25:03.105 May have multiple subsystem ports: Yes 00:25:03.105 May have multiple controllers: Yes 00:25:03.105 Associated with SR-IOV VF: No 00:25:03.105 Max Data Transfer Size: 131072 00:25:03.105 Max Number of Namespaces: 32 00:25:03.105 Max Number of I/O Queues: 127 
00:25:03.105 NVMe Specification Version (VS): 1.3 00:25:03.105 NVMe Specification Version (Identify): 1.3 00:25:03.105 Maximum Queue Entries: 128 00:25:03.105 Contiguous Queues Required: Yes 00:25:03.105 Arbitration Mechanisms Supported 00:25:03.105 Weighted Round Robin: Not Supported 00:25:03.105 Vendor Specific: Not Supported 00:25:03.105 Reset Timeout: 15000 ms 00:25:03.105 Doorbell Stride: 4 bytes 00:25:03.105 NVM Subsystem Reset: Not Supported 00:25:03.105 Command Sets Supported 00:25:03.105 NVM Command Set: Supported 00:25:03.105 Boot Partition: Not Supported 00:25:03.105 Memory Page Size Minimum: 4096 bytes 00:25:03.105 Memory Page Size Maximum: 4096 bytes 00:25:03.105 Persistent Memory Region: Not Supported 00:25:03.105 Optional Asynchronous Events Supported 00:25:03.105 Namespace Attribute Notices: Supported 00:25:03.105 Firmware Activation Notices: Not Supported 00:25:03.105 ANA Change Notices: Not Supported 00:25:03.105 PLE Aggregate Log Change Notices: Not Supported 00:25:03.105 LBA Status Info Alert Notices: Not Supported 00:25:03.105 EGE Aggregate Log Change Notices: Not Supported 00:25:03.105 Normal NVM Subsystem Shutdown event: Not Supported 00:25:03.105 Zone Descriptor Change Notices: Not Supported 00:25:03.105 Discovery Log Change Notices: Not Supported 00:25:03.105 Controller Attributes 00:25:03.105 128-bit Host Identifier: Supported 00:25:03.105 Non-Operational Permissive Mode: Not Supported 00:25:03.105 NVM Sets: Not Supported 00:25:03.105 Read Recovery Levels: Not Supported 00:25:03.105 Endurance Groups: Not Supported 00:25:03.105 Predictable Latency Mode: Not Supported 00:25:03.105 Traffic Based Keep ALive: Not Supported 00:25:03.105 Namespace Granularity: Not Supported 00:25:03.105 SQ Associations: Not Supported 00:25:03.105 UUID List: Not Supported 00:25:03.105 Multi-Domain Subsystem: Not Supported 00:25:03.105 Fixed Capacity Management: Not Supported 00:25:03.105 Variable Capacity Management: Not Supported 00:25:03.105 Delete Endurance Group: Not Supported 00:25:03.105 Delete NVM Set: Not Supported 00:25:03.105 Extended LBA Formats Supported: Not Supported 00:25:03.105 Flexible Data Placement Supported: Not Supported 00:25:03.105 00:25:03.105 Controller Memory Buffer Support 00:25:03.105 ================================ 00:25:03.105 Supported: No 00:25:03.105 00:25:03.105 Persistent Memory Region Support 00:25:03.105 ================================ 00:25:03.105 Supported: No 00:25:03.105 00:25:03.105 Admin Command Set Attributes 00:25:03.105 ============================ 00:25:03.105 Security Send/Receive: Not Supported 00:25:03.105 Format NVM: Not Supported 00:25:03.105 Firmware Activate/Download: Not Supported 00:25:03.105 Namespace Management: Not Supported 00:25:03.105 Device Self-Test: Not Supported 00:25:03.105 Directives: Not Supported 00:25:03.105 NVMe-MI: Not Supported 00:25:03.105 Virtualization Management: Not Supported 00:25:03.105 Doorbell Buffer Config: Not Supported 00:25:03.105 Get LBA Status Capability: Not Supported 00:25:03.105 Command & Feature Lockdown Capability: Not Supported 00:25:03.105 Abort Command Limit: 4 00:25:03.105 Async Event Request Limit: 4 00:25:03.105 Number of Firmware Slots: N/A 00:25:03.105 Firmware Slot 1 Read-Only: N/A 00:25:03.105 Firmware Activation Without Reset: N/A 00:25:03.105 Multiple Update Detection Support: N/A 00:25:03.105 Firmware Update Granularity: No Information Provided 00:25:03.105 Per-Namespace SMART Log: No 00:25:03.105 Asymmetric Namespace Access Log Page: Not Supported 00:25:03.105 Subsystem NQN: 
nqn.2016-06.io.spdk:cnode1 00:25:03.105 Command Effects Log Page: Supported 00:25:03.105 Get Log Page Extended Data: Supported 00:25:03.105 Telemetry Log Pages: Not Supported 00:25:03.105 Persistent Event Log Pages: Not Supported 00:25:03.105 Supported Log Pages Log Page: May Support 00:25:03.105 Commands Supported & Effects Log Page: Not Supported 00:25:03.105 Feature Identifiers & Effects Log Page:May Support 00:25:03.105 NVMe-MI Commands & Effects Log Page: May Support 00:25:03.105 Data Area 4 for Telemetry Log: Not Supported 00:25:03.105 Error Log Page Entries Supported: 128 00:25:03.105 Keep Alive: Supported 00:25:03.105 Keep Alive Granularity: 10000 ms 00:25:03.105 00:25:03.105 NVM Command Set Attributes 00:25:03.105 ========================== 00:25:03.105 Submission Queue Entry Size 00:25:03.105 Max: 64 00:25:03.105 Min: 64 00:25:03.105 Completion Queue Entry Size 00:25:03.105 Max: 16 00:25:03.105 Min: 16 00:25:03.105 Number of Namespaces: 32 00:25:03.105 Compare Command: Supported 00:25:03.105 Write Uncorrectable Command: Not Supported 00:25:03.105 Dataset Management Command: Supported 00:25:03.105 Write Zeroes Command: Supported 00:25:03.105 Set Features Save Field: Not Supported 00:25:03.105 Reservations: Supported 00:25:03.105 Timestamp: Not Supported 00:25:03.105 Copy: Supported 00:25:03.105 Volatile Write Cache: Present 00:25:03.105 Atomic Write Unit (Normal): 1 00:25:03.105 Atomic Write Unit (PFail): 1 00:25:03.105 Atomic Compare & Write Unit: 1 00:25:03.105 Fused Compare & Write: Supported 00:25:03.105 Scatter-Gather List 00:25:03.105 SGL Command Set: Supported 00:25:03.105 SGL Keyed: Supported 00:25:03.105 SGL Bit Bucket Descriptor: Not Supported 00:25:03.105 SGL Metadata Pointer: Not Supported 00:25:03.105 Oversized SGL: Not Supported 00:25:03.106 SGL Metadata Address: Not Supported 00:25:03.106 SGL Offset: Supported 00:25:03.106 Transport SGL Data Block: Not Supported 00:25:03.106 Replay Protected Memory Block: Not Supported 00:25:03.106 00:25:03.106 Firmware Slot Information 00:25:03.106 ========================= 00:25:03.106 Active slot: 1 00:25:03.106 Slot 1 Firmware Revision: 25.01 00:25:03.106 00:25:03.106 00:25:03.106 Commands Supported and Effects 00:25:03.106 ============================== 00:25:03.106 Admin Commands 00:25:03.106 -------------- 00:25:03.106 Get Log Page (02h): Supported 00:25:03.106 Identify (06h): Supported 00:25:03.106 Abort (08h): Supported 00:25:03.106 Set Features (09h): Supported 00:25:03.106 Get Features (0Ah): Supported 00:25:03.106 Asynchronous Event Request (0Ch): Supported 00:25:03.106 Keep Alive (18h): Supported 00:25:03.106 I/O Commands 00:25:03.106 ------------ 00:25:03.106 Flush (00h): Supported LBA-Change 00:25:03.106 Write (01h): Supported LBA-Change 00:25:03.106 Read (02h): Supported 00:25:03.106 Compare (05h): Supported 00:25:03.106 Write Zeroes (08h): Supported LBA-Change 00:25:03.106 Dataset Management (09h): Supported LBA-Change 00:25:03.106 Copy (19h): Supported LBA-Change 00:25:03.106 00:25:03.106 Error Log 00:25:03.106 ========= 00:25:03.106 00:25:03.106 Arbitration 00:25:03.106 =========== 00:25:03.106 Arbitration Burst: 1 00:25:03.106 00:25:03.106 Power Management 00:25:03.106 ================ 00:25:03.106 Number of Power States: 1 00:25:03.106 Current Power State: Power State #0 00:25:03.106 Power State #0: 00:25:03.106 Max Power: 0.00 W 00:25:03.106 Non-Operational State: Operational 00:25:03.106 Entry Latency: Not Reported 00:25:03.106 Exit Latency: Not Reported 00:25:03.106 Relative Read Throughput: 0 00:25:03.106 
Relative Read Latency: 0 00:25:03.106 Relative Write Throughput: 0 00:25:03.106 Relative Write Latency: 0 00:25:03.106 Idle Power: Not Reported 00:25:03.106 Active Power: Not Reported 00:25:03.106 Non-Operational Permissive Mode: Not Supported 00:25:03.106 00:25:03.106 Health Information 00:25:03.106 ================== 00:25:03.106 Critical Warnings: 00:25:03.106 Available Spare Space: OK 00:25:03.106 Temperature: OK 00:25:03.106 Device Reliability: OK 00:25:03.106 Read Only: No 00:25:03.106 Volatile Memory Backup: OK 00:25:03.106 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:03.106 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:25:03.106 Available Spare: 0% 00:25:03.106 Available Spare Threshold: 0% 00:25:03.106 Life Percentage Used: 0% 00:25:03.107 Data Units Read: 0 00:25:03.107 Data Units Written: 0 00:25:03.107 Host Read Commands: 0 00:25:03.107 Host Write Commands: 0 00:25:03.107 Controller Busy Time: 0 minutes 00:25:03.107 Power Cycles: 0 00:25:03.107 Power On Hours: 0 hours 00:25:03.107 Unsafe Shutdowns: 0 00:25:03.107 Unrecoverable Media Errors: 0 00:25:03.107 Lifetime Error Log Entries: 0 00:25:03.107 Warning Temperature Time: 0 minutes 00:25:03.107 Critical Temperature Time: 0 minutes 00:25:03.107 00:25:03.107 
Number of Queues 00:25:03.107 ================ 00:25:03.107 Number of I/O Submission Queues: 127 00:25:03.107 Number of I/O Completion Queues: 127 00:25:03.107 00:25:03.107 Active Namespaces 00:25:03.107 ================= 00:25:03.107 Namespace ID:1 00:25:03.107 Error Recovery Timeout: Unlimited 00:25:03.107 Command Set Identifier: NVM (00h) 00:25:03.107 Deallocate: Supported 00:25:03.107 Deallocated/Unwritten Error: Not Supported 00:25:03.107 Deallocated Read Value: Unknown 00:25:03.107 Deallocate in Write Zeroes: Not Supported 00:25:03.107 Deallocated Guard Field: 0xFFFF 00:25:03.107 Flush: Supported 00:25:03.107 Reservation: Supported 00:25:03.107 Namespace Sharing Capabilities: Multiple Controllers 00:25:03.107 Size (in LBAs): 131072 (0GiB) 00:25:03.107 Capacity (in LBAs): 131072 (0GiB) 00:25:03.107 Utilization (in LBAs): 131072 (0GiB) 00:25:03.107 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:03.107 EUI64: ABCDEF0123456789 00:25:03.107 UUID: da35657b-44b3-4c15-a4cd-bcffcdd00660 00:25:03.107 Thin Provisioning: Not Supported 00:25:03.107 Per-NS Atomic Units: Yes 00:25:03.107 Atomic Boundary Size (Normal): 0 00:25:03.107 Atomic Boundary Size (PFail): 0 00:25:03.107 Atomic Boundary Offset: 0 00:25:03.107 Maximum Single Source Range Length: 65535 00:25:03.107 Maximum Copy Length: 65535 00:25:03.107 Maximum Source Range Count: 1 00:25:03.107 NGUID/EUI64 Never Reused: No 00:25:03.107 Namespace Write Protected: No 00:25:03.107 Number of LBA Formats: 1 00:25:03.107 Current LBA Format: LBA Format #00 00:25:03.107 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:03.107 00:25:03.107 
[2024-11-20 15:34:51.863176] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:03.106 [2024-11-20 15:34:51.863182] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x141a690) 00:25:03.106 [2024-11-20 15:34:51.863189] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.106 [2024-11-20 15:34:51.863200] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x147cb80, cid 7, qid 0 00:25:03.106 [2024-11-20 15:34:51.863376] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:03.106 [2024-11-20 15:34:51.863383] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:03.106 [2024-11-20 15:34:51.863386] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:03.106 [2024-11-20 15:34:51.863390] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x147cb80) on tqpair=0x141a690 00:25:03.106 [2024-11-20 15:34:51.863424] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:25:03.106 [2024-11-20 15:34:51.863434] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x147c100) on tqpair=0x141a690 00:25:03.106 [2024-11-20 15:34:51.863441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.106 [2024-11-20 15:34:51.863446] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x147c280) on tqpair=0x141a690 00:25:03.106 [2024-11-20 15:34:51.863451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.106 [2024-11-20 15:34:51.863458] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x147c400) on tqpair=0x141a690 00:25:03.106 [2024-11-20 15:34:51.863463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.106 [2024-11-20 15:34:51.863468] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x147c580) on tqpair=0x141a690 00:25:03.106 [2024-11-20 15:34:51.863473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.106 [2024-11-20 15:34:51.863481] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:03.106 [2024-11-20 15:34:51.863485] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:03.106 [2024-11-20 15:34:51.863489] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x141a690) 00:25:03.106 [2024-11-20 15:34:51.863496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.106 
[2024-11-20 15:34:51.863508] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x147c580, cid 3, qid 0 00:25:03.106 [2024-11-20 15:34:51.863713] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:03.106 [2024-11-20 15:34:51.863720] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:03.106 [2024-11-20 15:34:51.863723] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:03.106 [2024-11-20 15:34:51.863727] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x147c580) on tqpair=0x141a690 00:25:03.106 [2024-11-20 15:34:51.863734] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:03.106 [2024-11-20 15:34:51.863738] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:03.106 [2024-11-20 15:34:51.863742] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x141a690) 00:25:03.106 [2024-11-20 15:34:51.863749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.106 [2024-11-20 15:34:51.863762] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x147c580, cid 3, qid 0 00:25:03.106 [2024-11-20 15:34:51.863963] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:03.106 [2024-11-20 15:34:51.863970] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:03.106 [2024-11-20 15:34:51.863973] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:03.106 [2024-11-20 15:34:51.863977] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x147c580) on tqpair=0x141a690 00:25:03.106 [2024-11-20 15:34:51.863982] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:25:03.106 [2024-11-20 15:34:51.863987] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:25:03.106 [2024-11-20 15:34:51.863996] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:03.106 [2024-11-20 15:34:51.864000] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:03.106 [2024-11-20 15:34:51.864004] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x141a690) 00:25:03.106 [2024-11-20 15:34:51.864010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.106 [2024-11-20 15:34:51.864021] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x147c580, cid 3, qid 0 00:25:03.106 [2024-11-20 15:34:51.864193] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:03.106 [2024-11-20 15:34:51.864200] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:03.106 [2024-11-20 15:34:51.864204] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:03.106 [2024-11-20 15:34:51.864208] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x147c580) on tqpair=0x141a690 00:25:03.106 [2024-11-20 15:34:51.864218] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:03.106 [2024-11-20 15:34:51.864222] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:03.106 [2024-11-20 15:34:51.864230] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x141a690) 00:25:03.106 
[2024-11-20 15:34:51.864237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.106 [2024-11-20 15:34:51.864247] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x147c580, cid 3, qid 0 00:25:03.106 [2024-11-20 15:34:51.868167] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:03.107 [2024-11-20 15:34:51.868176] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:03.107 [2024-11-20 15:34:51.868179] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:03.107 [2024-11-20 15:34:51.868183] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x147c580) on tqpair=0x141a690 00:25:03.107 [2024-11-20 15:34:51.868194] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:03.107 [2024-11-20 15:34:51.868198] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:03.107 [2024-11-20 15:34:51.868202] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x141a690) 00:25:03.107 [2024-11-20 15:34:51.868209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.107 [2024-11-20 15:34:51.868221] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x147c580, cid 3, qid 0 00:25:03.107 [2024-11-20 15:34:51.868407] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:03.107 [2024-11-20 15:34:51.868413] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:03.107 [2024-11-20 15:34:51.868416] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:03.107 [2024-11-20 15:34:51.868420] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x147c580) on tqpair=0x141a690 00:25:03.107 [2024-11-20 15:34:51.868428] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:25:03.107 
15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:25:03.107 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:03.107 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.107 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:03.107 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.107 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:03.107 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:25:03.107 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:03.107 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:25:03.107 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:03.107 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:25:03.107 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:03.107 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:03.107 rmmod nvme_tcp 00:25:03.107 rmmod nvme_fabrics 00:25:03.107 rmmod nvme_keyring 00:25:03.107 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:03.107 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:25:03.107 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:25:03.107 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 697356 ']' 00:25:03.107 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 697356 00:25:03.107 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 697356 ']' 00:25:03.107 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 697356 00:25:03.107 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:25:03.107 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:03.107 15:34:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 697356 00:25:03.107 15:34:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:03.107 15:34:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:03.107 15:34:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 697356' 00:25:03.107 killing process with
pid 697356 00:25:03.107 15:34:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 697356 00:25:03.107 15:34:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 697356 00:25:03.369 15:34:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:03.369 15:34:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:03.369 15:34:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:03.369 15:34:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:25:03.369 15:34:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:25:03.369 15:34:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:03.369 15:34:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:25:03.369 15:34:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:03.369 15:34:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:03.369 15:34:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.369 15:34:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.369 15:34:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.352 15:34:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:05.352 00:25:05.352 real 0m11.785s 00:25:05.352 user 0m8.954s 00:25:05.352 sys 0m6.200s 00:25:05.352 15:34:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:05.352 15:34:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:05.352 ************************************ 00:25:05.352 END TEST nvmf_identify 00:25:05.352 ************************************ 00:25:05.612 15:34:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:05.612 15:34:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:05.613 15:34:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:05.613 15:34:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.613 ************************************ 00:25:05.613 START TEST nvmf_perf 00:25:05.613 ************************************ 00:25:05.613 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:05.613 * Looking for test storage... 
00:25:05.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:05.613 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:05.613 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:25:05.613 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:05.613 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:05.613 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:05.613 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:05.613 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:05.613 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:25:05.613 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:25:05.613 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:25:05.613 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:25:05.613 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:25:05.613 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:25:05.875 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:25:05.875 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:05.875 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:25:05.875 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:25:05.875 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:05.875 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:05.875 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:25:05.875 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:25:05.875 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:05.875 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:25:05.875 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:05.875 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:25:05.875 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:25:05.875 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:05.875 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:25:05.875 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:05.875 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:05.875 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:05.875 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:25:05.875 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:05.875 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:05.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.875 --rc genhtml_branch_coverage=1 00:25:05.875 --rc genhtml_function_coverage=1 00:25:05.875 --rc genhtml_legend=1 00:25:05.875 --rc geninfo_all_blocks=1 00:25:05.875 --rc geninfo_unexecuted_blocks=1 00:25:05.875 00:25:05.875 ' 00:25:05.875 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:05.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.875 --rc genhtml_branch_coverage=1 00:25:05.875 --rc genhtml_function_coverage=1 00:25:05.875 --rc genhtml_legend=1 00:25:05.875 --rc geninfo_all_blocks=1 00:25:05.875 --rc geninfo_unexecuted_blocks=1 00:25:05.875 00:25:05.875 ' 00:25:05.875 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:05.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.875 --rc genhtml_branch_coverage=1 00:25:05.875 --rc genhtml_function_coverage=1 00:25:05.875 --rc genhtml_legend=1 00:25:05.875 --rc geninfo_all_blocks=1 00:25:05.875 --rc geninfo_unexecuted_blocks=1 00:25:05.875 00:25:05.875 ' 00:25:05.875 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:05.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.875 --rc genhtml_branch_coverage=1 00:25:05.875 --rc genhtml_function_coverage=1 00:25:05.875 --rc genhtml_legend=1 00:25:05.875 --rc geninfo_all_blocks=1 00:25:05.875 --rc geninfo_unexecuted_blocks=1 00:25:05.875 00:25:05.875 ' 00:25:05.875 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:05.875 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:25:05.875 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:05.875 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:05.875 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:05.875 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:05.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.876 15:34:54 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:05.876 15:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:14.021 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:14.021 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:14.021 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:14.021 15:35:01 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:14.021 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:14.021 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:14.022 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:14.022 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:14.022 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:14.022 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:14.022 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:14.022 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:14.022 15:35:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:14.022 15:35:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:14.022 15:35:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:14.022 15:35:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:14.022 15:35:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:14.022 15:35:02 
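By this point the trace has split the two E810 ports into a point-to-point test rig: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side and addressed 10.0.0.2/24, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24; the loopback bring-up, the iptables ACCEPT for port 4420, and the ping checks follow just below. A minimal sketch of the same wiring, with hypothetical port names eth0/eth1 in place of cvl_0_0/cvl_0_1:

# eth0, eth1, and spdk_tgt_ns are placeholder names for this sketch.
ip netns add spdk_tgt_ns                                   # target namespace (cvl_0_0_ns_spdk in this run)
ip link set eth0 netns spdk_tgt_ns                         # target port, like cvl_0_0
ip addr add 10.0.0.1/24 dev eth1                           # initiator port, like cvl_0_1
ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev eth0
ip link set eth1 up
ip netns exec spdk_tgt_ns ip link set eth0 up
ip netns exec spdk_tgt_ns ip link set lo up
ping -c 1 10.0.0.2                                         # initiator -> target reachability check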
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:14.022 15:35:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:14.022 15:35:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:14.022 15:35:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:14.022 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:14.022 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:25:14.022 00:25:14.022 --- 10.0.0.2 ping statistics --- 00:25:14.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.022 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:25:14.022 15:35:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:14.022 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:14.022 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:25:14.022 00:25:14.022 --- 10.0.0.1 ping statistics --- 00:25:14.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.022 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:25:14.022 15:35:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:14.022 15:35:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:25:14.022 15:35:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:14.022 15:35:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:14.022 15:35:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:14.022 15:35:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:14.022 15:35:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:14.022 15:35:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:14.022 15:35:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:14.022 15:35:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:14.022 15:35:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:14.022 15:35:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:14.022 15:35:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:14.022 15:35:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=701893 00:25:14.022 15:35:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 701893 00:25:14.022 15:35:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:14.022 15:35:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 701893 ']' 00:25:14.022 15:35:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:14.022 15:35:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:14.022 15:35:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:25:14.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:14.022 15:35:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:14.022 15:35:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:14.022 [2024-11-20 15:35:02.270953] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:25:14.022 [2024-11-20 15:35:02.271027] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:14.022 [2024-11-20 15:35:02.370327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:14.022 [2024-11-20 15:35:02.423388] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:14.022 [2024-11-20 15:35:02.423439] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:14.022 [2024-11-20 15:35:02.423448] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:14.022 [2024-11-20 15:35:02.423455] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:14.022 [2024-11-20 15:35:02.423462] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:14.022 [2024-11-20 15:35:02.425689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:14.022 [2024-11-20 15:35:02.425853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:14.022 [2024-11-20 15:35:02.426010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.022 [2024-11-20 15:35:02.426010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:14.282 15:35:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:14.282 15:35:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:25:14.282 15:35:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:14.282 15:35:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:14.282 15:35:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:14.282 15:35:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:14.282 15:35:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:14.282 15:35:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:14.854 15:35:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:14.854 15:35:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:15.115 15:35:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:25:15.115 15:35:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:15.377 15:35:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
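With the Malloc0 bdev created (bdev_malloc_create 64 512 above) and the local NVMe attached as Nvme0n1 via gen_nvme.sh and load_subsystem_config, perf.sh provisions the target purely over RPC; the transport, subsystem, namespace, and listener calls all appear in the trace below. A condensed sketch of that sequence, with rpc.py standing in for the full scripts/rpc.py path used in this run:

rpc.py bdev_malloc_create 64 512                  # -> Malloc0: 64 MiB, 512 B blocks
rpc.py nvmf_create_transport -t tcp -o            # TCP transport, default options
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420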
00:25:15.377 15:35:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:25:15.377 15:35:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:15.377 15:35:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:15.377 15:35:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:15.377 [2024-11-20 15:35:04.257067] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:15.377 15:35:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:15.638 15:35:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:15.638 15:35:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:15.899 15:35:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:15.899 15:35:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:16.160 15:35:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:16.160 [2024-11-20 15:35:05.052813] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:16.160 15:35:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:16.420 15:35:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:25:16.420 15:35:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:25:16.420 15:35:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:16.420 15:35:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:25:17.806 Initializing NVMe Controllers 00:25:17.806 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:25:17.806 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:25:17.806 Initialization complete. Launching workers. 
00:25:17.806 ======================================================== 00:25:17.806 Latency(us) 00:25:17.806 Device Information : IOPS MiB/s Average min max 00:25:17.806 PCIE (0000:65:00.0) NSID 1 from core 0: 79539.00 310.70 401.79 13.22 5057.67 00:25:17.806 ======================================================== 00:25:17.806 Total : 79539.00 310.70 401.79 13.22 5057.67 00:25:17.806 00:25:17.806 15:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:19.189 Initializing NVMe Controllers 00:25:19.189 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:19.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:19.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:19.189 Initialization complete. Launching workers. 00:25:19.189 ======================================================== 00:25:19.189 Latency(us) 00:25:19.189 Device Information : IOPS MiB/s Average min max 00:25:19.189 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 102.00 0.40 10203.79 234.81 45932.57 00:25:19.189 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 16501.65 7961.85 47899.53 00:25:19.189 ======================================================== 00:25:19.189 Total : 163.00 0.64 12560.66 234.81 47899.53 00:25:19.189 00:25:19.189 15:35:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:20.572 Initializing NVMe Controllers 00:25:20.572 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:20.572 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:20.572 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:20.572 Initialization complete. Launching workers. 00:25:20.572 ======================================================== 00:25:20.572 Latency(us) 00:25:20.572 Device Information : IOPS MiB/s Average min max 00:25:20.572 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11816.76 46.16 2709.57 489.29 6171.18 00:25:20.572 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3805.92 14.87 8461.68 5400.44 17981.46 00:25:20.572 ======================================================== 00:25:20.572 Total : 15622.69 61.03 4110.87 489.29 17981.46 00:25:20.572 00:25:20.572 15:35:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:20.572 15:35:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:20.572 15:35:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:23.113 Initializing NVMe Controllers 00:25:23.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:23.113 Controller IO queue size 128, less than required. 00:25:23.113 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
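The "Controller IO queue size 128, less than required" notice above (repeated next for the second namespace) is spdk_nvme_perf warning that the requested load cannot all be outstanding on the wire: with -q 128 and 256 KiB IOs that get split into controller-sized requests, demand likely exceeds the 128 entries the IO queue offers, so the excess waits in the host driver and the reported latency includes that host-side queuing. A sketch of a rerun sized to fit, keeping this run's flags and only lowering the depth (the exact safe value depends on the negotiated queue size and the maximum transfer size):

# -q 32 keeps outstanding requests per queue comfortably under the 128 entries negotiated here.
spdk_nvme_perf -q 32 -o 262144 -O 16384 -w randrw -M 50 -t 2 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'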
00:25:23.113 Controller IO queue size 128, less than required. 00:25:23.113 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:23.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:23.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:23.113 Initialization complete. Launching workers. 00:25:23.113 ======================================================== 00:25:23.113 Latency(us) 00:25:23.113 Device Information : IOPS MiB/s Average min max 00:25:23.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1863.79 465.95 69784.63 38386.45 113246.75 00:25:23.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 602.46 150.62 219565.87 94707.05 331137.65 00:25:23.113 ======================================================== 00:25:23.113 Total : 2466.25 616.56 106373.57 38386.45 331137.65 00:25:23.113 00:25:23.113 15:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:23.113 No valid NVMe controllers or AIO or URING devices found 00:25:23.113 Initializing NVMe Controllers 00:25:23.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:23.113 Controller IO queue size 128, less than required. 00:25:23.113 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:23.113 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:23.113 Controller IO queue size 128, less than required. 00:25:23.113 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:23.113 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:25:23.113 WARNING: Some requested NVMe devices were skipped 00:25:23.113 15:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:25.656 Initializing NVMe Controllers 00:25:25.656 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:25.656 Controller IO queue size 128, less than required. 00:25:25.656 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:25.656 Controller IO queue size 128, less than required. 00:25:25.656 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:25.656 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:25.656 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:25.656 Initialization complete. Launching workers. 
00:25:25.656 00:25:25.656 ==================== 00:25:25.656 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:25.656 TCP transport: 00:25:25.656 polls: 41947 00:25:25.656 idle_polls: 27967 00:25:25.656 sock_completions: 13980 00:25:25.656 nvme_completions: 7523 00:25:25.656 submitted_requests: 11360 00:25:25.656 queued_requests: 1 00:25:25.656 00:25:25.656 ==================== 00:25:25.656 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:25.656 TCP transport: 00:25:25.656 polls: 40262 00:25:25.656 idle_polls: 25001 00:25:25.656 sock_completions: 15261 00:25:25.656 nvme_completions: 6973 00:25:25.656 submitted_requests: 10456 00:25:25.656 queued_requests: 1 00:25:25.656 ======================================================== 00:25:25.656 Latency(us) 00:25:25.656 Device Information : IOPS MiB/s Average min max 00:25:25.656 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1880.43 470.11 69441.15 32437.37 126985.96 00:25:25.656 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1742.93 435.73 74181.98 30515.40 118006.39 00:25:25.656 ======================================================== 00:25:25.656 Total : 3623.36 905.84 71721.61 30515.40 126985.96 00:25:25.656 00:25:25.656 15:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:25:25.656 15:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:25.656 15:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:25:25.656 15:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:25.656 15:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:25:25.656 15:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:25.656 15:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:25:25.916 15:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:25.916 15:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:25:25.916 15:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:25.916 15:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:25.916 rmmod nvme_tcp 00:25:25.916 rmmod nvme_fabrics 00:25:25.916 rmmod nvme_keyring 00:25:25.916 15:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:25.916 15:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:25:25.916 15:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:25:25.916 15:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 701893 ']' 00:25:25.916 15:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 701893 00:25:25.916 15:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 701893 ']' 00:25:25.916 15:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 701893 00:25:25.916 15:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:25:25.916 15:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:25.916 15:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 701893 00:25:25.916 15:35:14 
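The teardown running here follows the suite's killprocess() pattern: before signalling pid 701893 it re-checks with ps --no-headers -o comm= that the pid still belongs to the process that was started (reactor_0, the SPDK main thread), so a recycled pid is never killed by mistake; the kill and wait land just below. A minimal sketch of that guard, not the exact killprocess body:

pid=701893                                   # pid recorded when nvmf_tgt was launched (value from this run)
if ps --no-headers -o comm= "$pid" | grep -q '^reactor_'; then
    kill "$pid"
    wait "$pid"                              # wait works here because the target is our own child
fi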
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:25.916 15:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:25.916 15:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 701893' 00:25:25.916 killing process with pid 701893 00:25:25.916 15:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 701893 00:25:25.916 15:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 701893 00:25:27.828 15:35:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:27.828 15:35:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:27.828 15:35:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:27.828 15:35:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:25:27.828 15:35:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:25:27.828 15:35:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:27.828 15:35:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:25:27.828 15:35:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:27.828 15:35:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:27.828 15:35:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.828 15:35:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:27.828 15:35:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.374 15:35:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:30.374 00:25:30.374 real 0m24.408s 00:25:30.374 user 0m58.746s 00:25:30.374 sys 0m8.701s 00:25:30.374 15:35:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:30.374 15:35:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:30.374 ************************************ 00:25:30.374 END TEST nvmf_perf 00:25:30.374 ************************************ 00:25:30.374 15:35:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:30.374 15:35:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:30.374 15:35:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:30.374 15:35:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.374 ************************************ 00:25:30.374 START TEST nvmf_fio_host 00:25:30.374 ************************************ 00:25:30.374 15:35:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:30.374 * Looking for test storage... 
00:25:30.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:30.374 15:35:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:30.374 15:35:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:25:30.374 15:35:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:30.374 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:30.374 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:30.374 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:30.374 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:30.374 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:30.374 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:30.374 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:30.374 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:30.374 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:30.374 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:30.374 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:30.374 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:30.374 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:25:30.374 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:25:30.374 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:30.374 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:30.374 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:25:30.374 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:25:30.374 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:30.374 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:30.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.375 --rc genhtml_branch_coverage=1 00:25:30.375 --rc genhtml_function_coverage=1 00:25:30.375 --rc genhtml_legend=1 00:25:30.375 --rc geninfo_all_blocks=1 00:25:30.375 --rc geninfo_unexecuted_blocks=1 00:25:30.375 00:25:30.375 ' 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:30.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.375 --rc genhtml_branch_coverage=1 00:25:30.375 --rc genhtml_function_coverage=1 00:25:30.375 --rc genhtml_legend=1 00:25:30.375 --rc geninfo_all_blocks=1 00:25:30.375 --rc geninfo_unexecuted_blocks=1 00:25:30.375 00:25:30.375 ' 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:30.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.375 --rc genhtml_branch_coverage=1 00:25:30.375 --rc genhtml_function_coverage=1 00:25:30.375 --rc genhtml_legend=1 00:25:30.375 --rc geninfo_all_blocks=1 00:25:30.375 --rc geninfo_unexecuted_blocks=1 00:25:30.375 00:25:30.375 ' 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:30.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.375 --rc genhtml_branch_coverage=1 00:25:30.375 --rc genhtml_function_coverage=1 00:25:30.375 --rc genhtml_legend=1 00:25:30.375 --rc geninfo_all_blocks=1 00:25:30.375 --rc geninfo_unexecuted_blocks=1 00:25:30.375 00:25:30.375 ' 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:30.375 15:35:19 
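The cmp_versions dance above is how common.sh decides whether the installed lcov predates version 2: both version strings are split on '.', '-', and ':' into arrays (read -ra with IFS=.-:) and compared field by field, so 1.15 < 2 is settled by the very first pair of fields. A minimal standalone sketch of the same comparison:

cmp_lt() {                                   # returns 0 (true) if version $1 < version $2
    local IFS=.-:
    local -a a=($1)
    local -a b=($2)
    local i
    for ((i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                                 # versions are equal
}
cmp_lt 1.15 2 && echo "lcov predates 2"      # prints: lcov predates 2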
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:30.375 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:30.376 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:30.376 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:30.376 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:30.376 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:30.376 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:30.376 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:30.376 
15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:25:30.376 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:30.376 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:30.376 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:30.376 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:30.376 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:30.376 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.376 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:30.376 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.376 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:30.376 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:30.376 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:30.376 15:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:38.511 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:38.511 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:38.511 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:38.512 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:38.512 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:38.512 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:38.512 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:25:38.512 00:25:38.512 --- 10.0.0.2 ping statistics --- 00:25:38.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.512 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:38.512 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:38.512 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:25:38.512 00:25:38.512 --- 10.0.0.1 ping statistics --- 00:25:38.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.512 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=708785 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 708785 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 708785 ']' 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:38.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:38.512 15:35:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.512 [2024-11-20 15:35:26.670296] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
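[editor's note] The block above is the harness building its point-to-point TCP test topology: the target-side port (cvl_0_0) is moved into a fresh network namespace, cvl_0_0_ns_spdk, and addressed as 10.0.0.2/24; the initiator-side port (cvl_0_1) stays in the default namespace as 10.0.0.1/24; an iptables rule opens TCP port 4420; and one ping in each direction verifies the link before the target starts. A minimal standalone sketch of the same setup follows (interface names are the ones in this log and will differ on other hosts):

    NS=cvl_0_0_ns_spdk
    TGT_IF=cvl_0_0     # port handed to the SPDK target
    INI_IF=cvl_0_1     # port left in the default namespace for the initiator

    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"            # target port now lives in the namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
    ping -c 1 10.0.0.2                           # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1       # target -> initiator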
00:25:38.512 [2024-11-20 15:35:26.670364] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:38.512 [2024-11-20 15:35:26.769872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:38.512 [2024-11-20 15:35:26.823712] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:38.512 [2024-11-20 15:35:26.823767] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:38.512 [2024-11-20 15:35:26.823776] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:38.512 [2024-11-20 15:35:26.823783] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:38.512 [2024-11-20 15:35:26.823789] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:38.512 [2024-11-20 15:35:26.826066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:38.512 [2024-11-20 15:35:26.826226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:38.512 [2024-11-20 15:35:26.826299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.512 [2024-11-20 15:35:26.826300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:38.773 15:35:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:38.773 15:35:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:25:38.773 15:35:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:38.773 [2024-11-20 15:35:27.659187] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:38.773 15:35:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:38.773 15:35:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:38.773 15:35:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.033 15:35:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:39.033 Malloc1 00:25:39.033 15:35:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:39.294 15:35:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:39.554 15:35:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:39.554 [2024-11-20 15:35:28.508058] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:39.815 15:35:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:39.815 15:35:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:39.815 15:35:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:39.815 15:35:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:39.815 15:35:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:39.815 15:35:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:39.815 15:35:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:39.815 15:35:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:39.815 15:35:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:25:39.815 15:35:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:39.815 15:35:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:39.815 15:35:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:39.815 15:35:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:25:39.815 15:35:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:39.815 15:35:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:39.815 15:35:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:39.815 15:35:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:39.815 15:35:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:39.815 15:35:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:39.815 15:35:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:40.096 15:35:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:40.096 15:35:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:40.096 15:35:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:40.096 15:35:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:40.359 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:40.359 fio-3.35 00:25:40.359 Starting 1 thread 00:25:42.899 00:25:42.900 test: (groupid=0, jobs=1): 
err= 0: pid=709633: Wed Nov 20 15:35:31 2024 00:25:42.900 read: IOPS=13.8k, BW=53.7MiB/s (56.3MB/s)(108MiB/2004msec) 00:25:42.900 slat (usec): min=2, max=278, avg= 2.14, stdev= 2.41 00:25:42.900 clat (usec): min=3241, max=8842, avg=5113.35, stdev=367.48 00:25:42.900 lat (usec): min=3243, max=8844, avg=5115.50, stdev=367.55 00:25:42.900 clat percentiles (usec): 00:25:42.900 | 1.00th=[ 4293], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4817], 00:25:42.900 | 30.00th=[ 4948], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5211], 00:25:42.900 | 70.00th=[ 5276], 80.00th=[ 5407], 90.00th=[ 5538], 95.00th=[ 5669], 00:25:42.900 | 99.00th=[ 5997], 99.50th=[ 6456], 99.90th=[ 7242], 99.95th=[ 8160], 00:25:42.900 | 99.99th=[ 8717] 00:25:42.900 bw ( KiB/s): min=53496, max=55592, per=99.97%, avg=55000.00, stdev=1005.05, samples=4 00:25:42.900 iops : min=13374, max=13898, avg=13750.00, stdev=251.26, samples=4 00:25:42.900 write: IOPS=13.7k, BW=53.7MiB/s (56.3MB/s)(108MiB/2004msec); 0 zone resets 00:25:42.900 slat (usec): min=2, max=271, avg= 2.21, stdev= 1.86 00:25:42.900 clat (usec): min=2700, max=7861, avg=4130.87, stdev=303.84 00:25:42.900 lat (usec): min=2702, max=7863, avg=4133.08, stdev=303.96 00:25:42.900 clat percentiles (usec): 00:25:42.900 | 1.00th=[ 3458], 5.00th=[ 3687], 10.00th=[ 3785], 20.00th=[ 3916], 00:25:42.900 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4178], 00:25:42.900 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4555], 00:25:42.900 | 99.00th=[ 4817], 99.50th=[ 5145], 99.90th=[ 6063], 99.95th=[ 7177], 00:25:42.900 | 99.99th=[ 7832] 00:25:42.900 bw ( KiB/s): min=54008, max=55488, per=99.96%, avg=54924.00, stdev=639.35, samples=4 00:25:42.900 iops : min=13502, max=13872, avg=13731.00, stdev=159.84, samples=4 00:25:42.900 lat (msec) : 4=15.86%, 10=84.14% 00:25:42.900 cpu : usr=74.04%, sys=24.76%, ctx=12, majf=0, minf=17 00:25:42.900 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:25:42.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.900 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:42.900 issued rwts: total=27562,27527,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.900 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:42.900 00:25:42.900 Run status group 0 (all jobs): 00:25:42.900 READ: bw=53.7MiB/s (56.3MB/s), 53.7MiB/s-53.7MiB/s (56.3MB/s-56.3MB/s), io=108MiB (113MB), run=2004-2004msec 00:25:42.900 WRITE: bw=53.7MiB/s (56.3MB/s), 53.7MiB/s-53.7MiB/s (56.3MB/s-56.3MB/s), io=108MiB (113MB), run=2004-2004msec 00:25:42.900 15:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:42.900 15:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:42.900 15:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:42.900 15:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:42.900 15:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:42.900 
15:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:42.900 15:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:25:42.900 15:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:42.900 15:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:42.900 15:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:42.900 15:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:25:42.900 15:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:42.900 15:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:42.900 15:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:42.900 15:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:42.900 15:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:42.900 15:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:42.900 15:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:42.900 15:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:42.900 15:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:42.900 15:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:42.900 15:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:42.900 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:42.900 fio-3.35 00:25:42.900 Starting 1 thread 00:25:45.442 00:25:45.442 test: (groupid=0, jobs=1): err= 0: pid=710149: Wed Nov 20 15:35:34 2024 00:25:45.442 read: IOPS=9459, BW=148MiB/s (155MB/s)(296MiB/2003msec) 00:25:45.442 slat (usec): min=3, max=110, avg= 3.61, stdev= 1.55 00:25:45.442 clat (usec): min=2273, max=15428, avg=8249.78, stdev=1895.42 00:25:45.442 lat (usec): min=2277, max=15431, avg=8253.39, stdev=1895.51 00:25:45.442 clat percentiles (usec): 00:25:45.442 | 1.00th=[ 4293], 5.00th=[ 5407], 10.00th=[ 5866], 20.00th=[ 6456], 00:25:45.442 | 30.00th=[ 7111], 40.00th=[ 7635], 50.00th=[ 8160], 60.00th=[ 8717], 00:25:45.442 | 70.00th=[ 9241], 80.00th=[10028], 90.00th=[10683], 95.00th=[11338], 00:25:45.442 | 99.00th=[12780], 99.50th=[13304], 99.90th=[14222], 99.95th=[14615], 00:25:45.442 | 99.99th=[15008] 00:25:45.442 bw ( KiB/s): min=64896, max=82618, per=49.34%, avg=74670.50, stdev=7375.10, samples=4 00:25:45.442 iops : min= 4056, max= 5163, avg=4666.75, stdev=460.72, samples=4 00:25:45.442 write: IOPS=5625, BW=87.9MiB/s (92.2MB/s)(153MiB/1745msec); 0 zone resets 00:25:45.442 slat (usec): min=39, max=344, 
avg=40.80, stdev= 6.30 00:25:45.442 clat (usec): min=2156, max=15422, avg=9124.17, stdev=1322.14 00:25:45.442 lat (usec): min=2196, max=15461, avg=9164.98, stdev=1323.18 00:25:45.442 clat percentiles (usec): 00:25:45.442 | 1.00th=[ 5997], 5.00th=[ 7242], 10.00th=[ 7570], 20.00th=[ 8029], 00:25:45.442 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9372], 00:25:45.442 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[10683], 95.00th=[11338], 00:25:45.442 | 99.00th=[12649], 99.50th=[13304], 99.90th=[14091], 99.95th=[14353], 00:25:45.442 | 99.99th=[15401] 00:25:45.442 bw ( KiB/s): min=67424, max=85940, per=86.58%, avg=77925.00, stdev=7721.86, samples=4 00:25:45.442 iops : min= 4214, max= 5371, avg=4870.25, stdev=482.53, samples=4 00:25:45.442 lat (msec) : 4=0.50%, 10=78.46%, 20=21.04% 00:25:45.442 cpu : usr=85.81%, sys=12.69%, ctx=19, majf=0, minf=33 00:25:45.442 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:45.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:45.442 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:45.442 issued rwts: total=18947,9816,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:45.442 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:45.442 00:25:45.442 Run status group 0 (all jobs): 00:25:45.442 READ: bw=148MiB/s (155MB/s), 148MiB/s-148MiB/s (155MB/s-155MB/s), io=296MiB (310MB), run=2003-2003msec 00:25:45.442 WRITE: bw=87.9MiB/s (92.2MB/s), 87.9MiB/s-87.9MiB/s (92.2MB/s-92.2MB/s), io=153MiB (161MB), run=1745-1745msec 00:25:45.442 15:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:45.442 15:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:45.442 15:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:45.442 15:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:45.442 15:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:45.442 15:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:45.442 15:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:25:45.442 15:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:45.442 15:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:25:45.442 15:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:45.442 15:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:45.442 rmmod nvme_tcp 00:25:45.442 rmmod nvme_fabrics 00:25:45.442 rmmod nvme_keyring 00:25:45.442 15:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:45.442 15:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:25:45.442 15:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:25:45.442 15:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 708785 ']' 00:25:45.442 15:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 708785 00:25:45.443 15:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 708785 ']' 00:25:45.443 15:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 708785 
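[editor's note] Stripped of the xtrace noise, the whole nvmf_fio_host run above is a short sequence: start nvmf_tgt inside the namespace, provision a malloc-backed subsystem over JSON-RPC, then drive it with fio through the SPDK NVMe plugin, where the fio "filename" is really a transport ID rather than a block device. A condensed sketch with paths shortened (the contents of example_config.fio are assumed; only the options visible in this log are shown):

    # Target side, inside the namespace created earlier; the harness waits for
    # the RPC socket (/var/tmp/spdk.sock) before issuing any rpc.py calls.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1   # 64 MiB bdev, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: preload the SPDK fio ioengine and aim it at the listener.
    LD_PRELOAD=./build/fio/spdk_nvme /usr/src/fio/fio app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096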
00:25:45.443 15:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:25:45.702 15:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:45.702 15:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 708785 00:25:45.702 15:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:45.703 15:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:45.703 15:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 708785' 00:25:45.703 killing process with pid 708785 00:25:45.703 15:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 708785 00:25:45.703 15:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 708785 00:25:45.703 15:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:45.703 15:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:45.703 15:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:45.703 15:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:25:45.703 15:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:25:45.703 15:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:45.703 15:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:45.703 15:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:45.703 15:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:45.703 15:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.703 15:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:45.703 15:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:48.242 00:25:48.242 real 0m17.794s 00:25:48.242 user 1m10.601s 00:25:48.242 sys 0m7.567s 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.242 ************************************ 00:25:48.242 END TEST nvmf_fio_host 00:25:48.242 ************************************ 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.242 ************************************ 00:25:48.242 START TEST nvmf_failover 00:25:48.242 ************************************ 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh 
--transport=tcp 00:25:48.242 * Looking for test storage... 00:25:48.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:48.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.242 --rc genhtml_branch_coverage=1 00:25:48.242 --rc genhtml_function_coverage=1 00:25:48.242 --rc genhtml_legend=1 00:25:48.242 --rc geninfo_all_blocks=1 00:25:48.242 --rc geninfo_unexecuted_blocks=1 00:25:48.242 00:25:48.242 ' 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:48.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.242 --rc genhtml_branch_coverage=1 00:25:48.242 --rc genhtml_function_coverage=1 00:25:48.242 --rc genhtml_legend=1 00:25:48.242 --rc geninfo_all_blocks=1 00:25:48.242 --rc geninfo_unexecuted_blocks=1 00:25:48.242 00:25:48.242 ' 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:48.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.242 --rc genhtml_branch_coverage=1 00:25:48.242 --rc genhtml_function_coverage=1 00:25:48.242 --rc genhtml_legend=1 00:25:48.242 --rc geninfo_all_blocks=1 00:25:48.242 --rc geninfo_unexecuted_blocks=1 00:25:48.242 00:25:48.242 ' 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:48.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.242 --rc genhtml_branch_coverage=1 00:25:48.242 --rc genhtml_function_coverage=1 00:25:48.242 --rc genhtml_legend=1 00:25:48.242 --rc geninfo_all_blocks=1 00:25:48.242 --rc geninfo_unexecuted_blocks=1 00:25:48.242 00:25:48.242 ' 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:48.242 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:48.243 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
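[editor's note] The lcov gate a few entries back (lt 1.15 2) is a component-wise version comparison: scripts/common.sh splits each version string on '.', '-' and ':' into an array and compares field by field, exactly as the trace walks through. A self-contained sketch of that logic, assuming purely numeric fields (the real cmp_versions also validates each field with its decimal helper):

    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            local d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # missing fields compare as 0
            (( d1 < d2 )) && return 0                 # first differing field decides
            (( d1 > d2 )) && return 1
        done
        return 1                                      # equal is not less-than
    }

    lt 1.15 2 && echo "lcov older than 2: keep the --rc option spelling seen above"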
00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:25:48.243 15:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:56.380 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:56.380 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:25:56.380 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:56.380 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:56.380 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:56.380 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:56.380 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:56.380 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:25:56.380 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:56.380 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:25:56.380 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:25:56.380 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:25:56.380 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:25:56.380 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:25:56.380 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:25:56.380 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:56.380 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:56.380 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:56.380 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:56.380 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:56.380 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:56.380 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:56.380 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:56.380 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:56.380 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:56.380 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:56.380 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:56.380 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:56.380 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:56.380 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:56.380 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:56.380 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:56.381 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:56.381 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:56.381 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:56.381 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:56.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:56.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.670 ms 00:25:56.381 00:25:56.381 --- 10.0.0.2 ping statistics --- 00:25:56.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:56.381 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:56.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:56.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:25:56.381 00:25:56.381 --- 10.0.0.1 ping statistics --- 00:25:56.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:56.381 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=714802 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 714802 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 714802 ']' 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:56.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:56.381 15:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:56.381 [2024-11-20 15:35:44.567002] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:25:56.381 [2024-11-20 15:35:44.567068] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:56.381 [2024-11-20 15:35:44.668487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:56.381 [2024-11-20 15:35:44.720605] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
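[editor's note] What follows is the heart of the failover test: the same subsystem gets listeners on three ports (4420-4422), bdevperf attaches NVMe0 through two of them with -x failover so the bdev layer knows an alternate path, and the harness then removes the active listener to force a path switch while the verify workload runs (the tcp.c recv-state messages further down accompany the teardown of the 4420 connections). Condensed into the commands the trace is about to execute:

    rpc=./scripts/rpc.py

    # Target: one malloc-backed subsystem reachable on three TCP ports.
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
    done

    # Initiator: bdevperf in RPC-driven mode (-z); verify workload, qd 128, 4 KiB I/O.
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

    # Trigger the failover: drop the listener the controller connected through first.
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420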
00:25:56.381 [2024-11-20 15:35:44.720656] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:56.381 [2024-11-20 15:35:44.720664] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:56.381 [2024-11-20 15:35:44.720671] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:56.381 [2024-11-20 15:35:44.720678] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:56.381 [2024-11-20 15:35:44.722508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:56.381 [2024-11-20 15:35:44.722674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:56.381 [2024-11-20 15:35:44.722675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:56.641 15:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:56.642 15:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:56.642 15:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:56.642 15:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:56.642 15:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:56.642 15:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:56.642 15:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:56.642 [2024-11-20 15:35:45.582428] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:56.901 15:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:56.901 Malloc0 00:25:56.901 15:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:57.161 15:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:57.422 15:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:57.682 [2024-11-20 15:35:46.389408] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:57.682 15:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:57.682 [2024-11-20 15:35:46.593952] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:57.682 15:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:57.942 [2024-11-20 15:35:46.794559] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
00:25:57.422 15:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:57.682 [2024-11-20 15:35:46.389408] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:57.682 15:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:57.682 [2024-11-20 15:35:46.593952] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:25:57.682 15:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:57.942 [2024-11-20 15:35:46.794559] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:25:57.942 15:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=715354
00:25:57.942 15:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:25:57.942 15:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:25:57.942 15:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 715354 /var/tmp/bdevperf.sock
00:25:57.942 15:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 715354 ']'
00:25:57.942 15:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:57.942 15:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:57.942 15:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:57.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:57.942 15:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:57.942 15:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:58.883 15:35:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:58.883 15:35:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:25:58.883 15:35:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:59.143 NVMe0n1
00:25:59.143 15:35:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:59.403 
00:25:59.403 15:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:59.403 15:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=715585
00:25:59.403 15:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:26:00.800 15:35:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:00.800 [2024-11-20 15:35:49.495514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1174ed0 is same with the state(6) to be set
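In outline: bdevperf was started as an RPC server (-z) on /var/tmp/bdevperf.sock, the same remote controller was attached twice under one bdev name (NVMe0) with -x failover, so ports 4420 and 4421 become the active and standby paths of a single multipath controller, and then the listener behind the active path was removed to force the first failover while the verify workload runs. A sketch of that host-side pattern, using the same $rpc shorthand as above:

  # Start the I/O generator in RPC-server mode, then build the multipath bdev:
  bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover   # active path
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover   # standby path
  # Dropping the listener behind the active path is what triggers the failover:
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420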
00:26:00.800 15:35:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:26:04.101 15:35:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:26:04.101 
00:26:04.101 15:35:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:26:04.362 [2024-11-20 15:35:53.077216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1175cf0 is same with the state(6) to be set
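The tcp.c:1773 *ERROR* lines accompany each nvmf_subsystem_remove_listener call in this run: while the target tears down the qpairs that were accepted on the dropped port, setting a dying connection's receive state lands on the state it is already in. The test keeps running and I/O continues on the surviving path, so in this context it is teardown noise rather than a failure. To see the host's view of the paths while this happens, the standard bdev_nvme query RPC can be pointed at the bdevperf socket (an illustrative command, not one the script runs):

  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers   # lists the controller(s) behind NVMe0 with their transport addresses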
00:26:04.363 15:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:26:07.664 15:35:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:07.664 [2024-11-20 15:35:56.266482] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:07.664 15:35:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:26:08.607 15:35:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:26:08.607 [2024-11-20 15:35:57.453972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1176bf0 is same with the state(6) to be set
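That last remove_listener completes the rotation: the host has now been forced across all three ports while bdevperf's verify job kept running. The cycle, reconstructed from the timestamps in this log (the active/standby roles follow from the -x failover policy and are inferred, not printed anywhere):

  # 15:35:47  attach 10.0.0.2:4420  -> active path
  # 15:35:47  attach 10.0.0.2:4421  -> standby
  # 15:35:49  remove listener 4420  -> fail over to 4421
  # 15:35:52  attach 10.0.0.2:4422  -> standby
  # 15:35:53  remove listener 4421  -> fail over to 4422
  # 15:35:56  re-add listener 4420  -> 4420 reachable again
  # 15:35:57  remove listener 4422  -> fail back to 4420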
00:26:08.608 15:35:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 715585
00:26:15.343 {
00:26:15.343   "results": [
00:26:15.343     {
00:26:15.343       "job": "NVMe0n1",
00:26:15.343       "core_mask": "0x1",
00:26:15.343       "workload": "verify",
00:26:15.343       "status": "finished",
00:26:15.343       "verify_range": {
00:26:15.343         "start": 0,
00:26:15.343         "length": 16384
00:26:15.343       },
00:26:15.343       "queue_depth": 128,
00:26:15.343       "io_size": 4096,
00:26:15.343       "runtime": 15.006052,
00:26:15.343       "iops": 12449.377091322887,
00:26:15.343       "mibps": 48.63037926298003,
00:26:15.343       "io_failed": 3893,
00:26:15.343       "io_timeout": 0,
00:26:15.343       "avg_latency_us": 10050.19383122978,
00:26:15.343       "min_latency_us": 532.48,
00:26:15.343       "max_latency_us": 30583.466666666667
00:26:15.343     }
00:26:15.343   ],
00:26:15.343   "core_count": 1
00:26:15.343 }
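The report above is internally consistent and easy to sanity-check: mibps is just iops times the 4096-byte io_size, iops times runtime gives the number of completed I/Os, and io_failed (3893) apparently counts the commands aborted across the path drops (see the try.txt dump below). For example:

  awk 'BEGIN { print 12449.377091322887 * 4096 / 1048576 }'        # 48.6304 MiB/s, matches "mibps"
  awk 'BEGIN { printf "%.0f\n", 12449.377091322887 * 15.006052 }'  # 186816 I/Os completed in the 15 s run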
00:26:15.343 15:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 715354
00:26:15.343 15:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 715354 ']'
00:26:15.343 15:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 715354
00:26:15.343 15:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:26:15.343 15:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:15.343 15:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 715354
00:26:15.343 15:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:26:15.343 15:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:26:15.343 15:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 715354'
00:26:15.343 killing process with pid 715354
00:26:15.343 15:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 715354
00:26:15.343 15:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 715354
00:26:15.343 15:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:15.343 [2024-11-20 15:35:46.876043] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization...
00:26:15.343 [2024-11-20 15:35:46.876103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid715354 ]
00:26:15.343 [2024-11-20 15:35:46.966672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:15.343 [2024-11-20 15:35:47.002563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:15.343 Running I/O for 15 seconds...
00:26:15.343 10378.00 IOPS, 40.54 MiB/s [2024-11-20T14:36:04.303Z]
00:26:15.343 [2024-11-20 15:35:49.497247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:91616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:15.343 [2024-11-20 15:35:49.497282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
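The file being cat-ed at failover.sh@63 is bdevperf's own log (try.txt): the single-core bdevperf startup, the 15-second verify run, and then a per-command trace in which each WRITE or READ is paired with an ABORTED - SQ DELETION (00/08) completion. Status 00/08 is the generic NVMe "Command Aborted due to SQ Deletion" code, so these entries are the expected signature of commands that were in flight when a path's submission queue was deleted during the listener removals, not a data-integrity problem; the job above still reports status "finished". In this dump the aborted writes span lba 91616 through 92336 and the aborted reads lba 91368 through 91416. A couple of illustrative ways to slice such a dump, assuming try.txt is still on disk:

  grep -c 'ABORTED - SQ DELETION' try.txt                        # count the aborted completions
  grep -o 'lba:[0-9]*' try.txt | sort -t: -k2 -n | uniq | head   # which LBAs were in flight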
len:0x1000 00:26:15.343 [2024-11-20 15:35:49.497394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.343 [2024-11-20 15:35:49.497403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:91672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.343 [2024-11-20 15:35:49.497411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.343 [2024-11-20 15:35:49.497420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:91680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.343 [2024-11-20 15:35:49.497427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.343 [2024-11-20 15:35:49.497436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:91688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.343 [2024-11-20 15:35:49.497444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.343 [2024-11-20 15:35:49.497453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:91696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.343 [2024-11-20 15:35:49.497460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.343 [2024-11-20 15:35:49.497475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:91704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.343 [2024-11-20 15:35:49.497483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.343 [2024-11-20 15:35:49.497492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:91712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.343 [2024-11-20 15:35:49.497499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.343 [2024-11-20 15:35:49.497509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:91720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.343 [2024-11-20 15:35:49.497516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.343 [2024-11-20 15:35:49.497526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:91728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.343 [2024-11-20 15:35:49.497533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.343 [2024-11-20 15:35:49.497542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:91736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.343 [2024-11-20 15:35:49.497550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.343 [2024-11-20 15:35:49.497559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:91744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.343 [2024-11-20 
15:35:49.497567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.343 [2024-11-20 15:35:49.497576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:91752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.343 [2024-11-20 15:35:49.497583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.343 [2024-11-20 15:35:49.497592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:91760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.343 [2024-11-20 15:35:49.497599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.343 [2024-11-20 15:35:49.497609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:91768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.343 [2024-11-20 15:35:49.497616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.343 [2024-11-20 15:35:49.497625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:91776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.343 [2024-11-20 15:35:49.497633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.343 [2024-11-20 15:35:49.497642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:91784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.343 [2024-11-20 15:35:49.497649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.343 [2024-11-20 15:35:49.497658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.343 [2024-11-20 15:35:49.497666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.343 [2024-11-20 15:35:49.497675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:91800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.343 [2024-11-20 15:35:49.497684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.497693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:91808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.497701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.497710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:91816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.497717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.497727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:91824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.497734] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.497743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:91832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.497751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.497760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.497767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.497776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.497784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.497793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:91856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.497800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.497809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:91864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.497817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.497826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:91872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.497835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.497844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:91880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.497851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.497861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:91888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.497868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.497878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:91896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.497885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.497896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:91904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.497903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.497912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:91912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.497921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.497930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:91920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.497938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.497948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:91928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.497955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.497965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:91936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.497973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.497982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:91944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.497990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.497999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:91952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.498006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.498016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:91960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.498024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.498033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.498040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.498050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:91976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.498058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.498067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:91984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.498075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.498084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:91992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.498092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.498102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:92000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.498109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.498120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.498127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.498137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:92016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.498144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.498153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:92024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.498165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.498174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:92032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.498182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.498191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.498198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.498207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.498214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.498223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:92056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.498230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.498240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:92064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.498247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.498256] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:92072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.498264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.498273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:92080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.498280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.498289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:92088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.498297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.498306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:92096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.498313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.498323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:92104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.498336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.498345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:92112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.498352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.344 [2024-11-20 15:35:49.498361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:92120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.344 [2024-11-20 15:35:49.498369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.345 [2024-11-20 15:35:49.498378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:92128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.345 [2024-11-20 15:35:49.498386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.345 [2024-11-20 15:35:49.498395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:92136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.345 [2024-11-20 15:35:49.498402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.345 [2024-11-20 15:35:49.498411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:92144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.345 [2024-11-20 15:35:49.498419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.345 [2024-11-20 15:35:49.498428] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:92152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:15.345 [2024-11-20 15:35:49.498436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.345 [2024-11-20 15:35:49.498445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:92160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:15.345 [2024-11-20 15:35:49.498452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.345 [2024-11-20 15:35:49.498461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:92168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:15.345 [2024-11-20 15:35:49.498469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.345 [2024-11-20 15:35:49.498478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:92176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:15.345 [2024-11-20 15:35:49.498486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.345 [2024-11-20 15:35:49.498495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:92184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:15.345 [2024-11-20 15:35:49.498502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.345 [2024-11-20 15:35:49.498511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:92192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:15.345 [2024-11-20 15:35:49.498519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.345 [2024-11-20 15:35:49.498528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:92200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:15.345 [2024-11-20 15:35:49.498535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.345 [2024-11-20 15:35:49.498546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:92208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:15.345 [2024-11-20 15:35:49.498553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.345 [2024-11-20 15:35:49.498563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:92216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:15.345 [2024-11-20 15:35:49.498570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.345 [2024-11-20 15:35:49.498579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.345 [2024-11-20 15:35:49.498587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.345 [2024-11-20 15:35:49.498597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:91376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.345 [2024-11-20 15:35:49.498604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.345 [2024-11-20 15:35:49.498613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:91384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.345 [2024-11-20 15:35:49.498620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.345 [2024-11-20 15:35:49.498630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:91392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.345 [2024-11-20 15:35:49.498637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.345 [2024-11-20 15:35:49.498647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:91400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.345 [2024-11-20 15:35:49.498654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.345 [2024-11-20 15:35:49.498663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:91408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.345 [2024-11-20 15:35:49.498670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.345 [2024-11-20 15:35:49.498679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:91416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.345 [2024-11-20 15:35:49.498686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.345 [2024-11-20 15:35:49.498696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:92224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:15.345 [2024-11-20 15:35:49.498703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.345 [2024-11-20 15:35:49.498713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:92232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:15.345 [2024-11-20 15:35:49.498720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.345 [2024-11-20 15:35:49.498729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:92240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:15.345 [2024-11-20 15:35:49.498737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.345 [2024-11-20 15:35:49.498746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:92248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:15.345 [2024-11-20 15:35:49.498755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.345 [2024-11-20 15:35:49.498764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:92256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:15.345 [2024-11-20 15:35:49.498772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.345 [2024-11-20 15:35:49.498781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:92264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:15.345 [2024-11-20 15:35:49.498789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.345 [2024-11-20 15:35:49.498798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:92272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:15.345 [2024-11-20 15:35:49.498805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.345 [2024-11-20 15:35:49.498815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:15.345 [2024-11-20 15:35:49.498822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.345 [2024-11-20 15:35:49.498831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:92288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:15.345 [2024-11-20 15:35:49.498838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.345 [2024-11-20 15:35:49.498847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:92296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:15.345 [2024-11-20 15:35:49.498855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.345 [2024-11-20 15:35:49.498864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:92304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:15.345 [2024-11-20 15:35:49.498871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.345 [2024-11-20 15:35:49.498881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:92312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:15.345 [2024-11-20 15:35:49.498888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.345 [2024-11-20 15:35:49.498897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:92320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:15.345 [2024-11-20 15:35:49.498904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.345 [2024-11-20 15:35:49.498914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:92328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:15.345 [2024-11-20 15:35:49.498921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.345 [2024-11-20 15:35:49.498931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:92336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:15.345 [2024-11-20 15:35:49.498938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.345 [2024-11-20 15:35:49.498948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:15.345 [2024-11-20 15:35:49.498955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.345 [2024-11-20 15:35:49.498964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:92352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:15.345 [2024-11-20 15:35:49.498973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.345 [2024-11-20 15:35:49.498995] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.345 [2024-11-20 15:35:49.499003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92360 len:8 PRP1 0x0 PRP2 0x0
00:26:15.345 [2024-11-20 15:35:49.499010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.345 [2024-11-20 15:35:49.499047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:15.345 [2024-11-20 15:35:49.499058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.345 [2024-11-20 15:35:49.499066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:15.345 [2024-11-20 15:35:49.499073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.345 [2024-11-20 15:35:49.499082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:15.346 [2024-11-20 15:35:49.499090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.346 [2024-11-20 15:35:49.499098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:15.346 [2024-11-20 15:35:49.499105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.346 [2024-11-20 15:35:49.499113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1585d70 is same with the state(6) to be set
00:26:15.346 [2024-11-20 15:35:49.499337] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.346 [2024-11-20 15:35:49.499345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.346 [2024-11-20 15:35:49.499352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92368 len:8 PRP1 0x0 PRP2 0x0
00:26:15.346 [2024-11-20 15:35:49.499360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.346 [2024-11-20 15:35:49.499369] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.346 [2024-11-20 15:35:49.499375] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.346 [2024-11-20 15:35:49.499381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92376 len:8 PRP1 0x0 PRP2 0x0
00:26:15.346 [2024-11-20 15:35:49.499388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.346 [2024-11-20 15:35:49.499396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.346 [2024-11-20 15:35:49.499402] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.346 [2024-11-20 15:35:49.499408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91424 len:8 PRP1 0x0 PRP2 0x0
00:26:15.346 [2024-11-20 15:35:49.499415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.346 [2024-11-20 15:35:49.499423] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.346 [2024-11-20 15:35:49.499428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.346 [2024-11-20 15:35:49.499434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91432 len:8 PRP1 0x0 PRP2 0x0
00:26:15.346 [2024-11-20 15:35:49.499444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.346 [2024-11-20 15:35:49.499452] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.346 [2024-11-20 15:35:49.499457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.346 [2024-11-20 15:35:49.499463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91440 len:8 PRP1 0x0 PRP2 0x0
00:26:15.346 [2024-11-20 15:35:49.499471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.346 [2024-11-20 15:35:49.499478] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.346 [2024-11-20 15:35:49.499484] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.346 [2024-11-20 15:35:49.499490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91448 len:8 PRP1 0x0 PRP2 0x0
00:26:15.346 [2024-11-20 15:35:49.499497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.346 [2024-11-20 15:35:49.499505] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.346 [2024-11-20 15:35:49.499510] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.346 [2024-11-20 15:35:49.499516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91456 len:8 PRP1 0x0 PRP2 0x0
00:26:15.346 [2024-11-20 15:35:49.499523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.346 [2024-11-20 15:35:49.499531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.346 [2024-11-20 15:35:49.499537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.346 [2024-11-20 15:35:49.499543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91464 len:8 PRP1 0x0 PRP2 0x0
00:26:15.346 [2024-11-20 15:35:49.499550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.346 [2024-11-20 15:35:49.499557] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.346 [2024-11-20 15:35:49.499563] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.346 [2024-11-20 15:35:49.499569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91472 len:8 PRP1 0x0 PRP2 0x0
00:26:15.346 [2024-11-20 15:35:49.499576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.346 [2024-11-20 15:35:49.499584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.346 [2024-11-20 15:35:49.499590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.346 [2024-11-20 15:35:49.499596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91480 len:8 PRP1 0x0 PRP2 0x0
00:26:15.346 [2024-11-20 15:35:49.499603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.346 [2024-11-20 15:35:49.499611] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.346 [2024-11-20 15:35:49.499616] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.346 [2024-11-20 15:35:49.499622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91488 len:8 PRP1 0x0 PRP2 0x0
00:26:15.346 [2024-11-20 15:35:49.499630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.346 [2024-11-20 15:35:49.499638] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.346 [2024-11-20 15:35:49.499644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.346 [2024-11-20 15:35:49.499651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91496 len:8 PRP1 0x0 PRP2 0x0
00:26:15.346 [2024-11-20 15:35:49.499659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.346 [2024-11-20 15:35:49.499666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.346 [2024-11-20 15:35:49.499672] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.346 [2024-11-20 15:35:49.499678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91504 len:8 PRP1 0x0 PRP2 0x0
00:26:15.346 [2024-11-20 15:35:49.499685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.346 [2024-11-20 15:35:49.499694] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.346 [2024-11-20 15:35:49.499700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.346 [2024-11-20 15:35:49.499706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91512 len:8 PRP1 0x0 PRP2 0x0
00:26:15.346 [2024-11-20 15:35:49.499713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.346 [2024-11-20 15:35:49.499721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.346 [2024-11-20 15:35:49.499727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.346 [2024-11-20 15:35:49.499733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91520 len:8 PRP1 0x0 PRP2 0x0
00:26:15.346 [2024-11-20 15:35:49.499740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.346 [2024-11-20 15:35:49.499748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.346 [2024-11-20 15:35:49.499753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.346 [2024-11-20 15:35:49.499759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91528 len:8 PRP1 0x0 PRP2 0x0
00:26:15.346 [2024-11-20 15:35:49.499766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.346 [2024-11-20 15:35:49.499774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.346 [2024-11-20 15:35:49.499779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.346 [2024-11-20 15:35:49.499786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91536 len:8 PRP1 0x0 PRP2 0x0
00:26:15.346 [2024-11-20 15:35:49.499793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.346 [2024-11-20 15:35:49.499801] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.346 [2024-11-20 15:35:49.499806] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.346 [2024-11-20 15:35:49.499812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91544 len:8 PRP1 0x0 PRP2 0x0
00:26:15.346 [2024-11-20 15:35:49.499819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.346 [2024-11-20 15:35:49.499827] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.346 [2024-11-20 15:35:49.499832] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.346 [2024-11-20 15:35:49.499838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92384 len:8 PRP1 0x0 PRP2 0x0
00:26:15.346 [2024-11-20 15:35:49.499846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.346 [2024-11-20 15:35:49.499853] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.346 [2024-11-20 15:35:49.499860] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.346 [2024-11-20 15:35:49.499866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91552 len:8 PRP1 0x0 PRP2 0x0
00:26:15.346 [2024-11-20 15:35:49.499873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.346 [2024-11-20 15:35:49.499881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.346 [2024-11-20 15:35:49.499887] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.346 [2024-11-20 15:35:49.499892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91560 len:8 PRP1 0x0 PRP2 0x0
00:26:15.346 [2024-11-20 15:35:49.499899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.346 [2024-11-20 15:35:49.499907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.346 [2024-11-20 15:35:49.499913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.346 [2024-11-20 15:35:49.499919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91568 len:8 PRP1 0x0 PRP2 0x0
00:26:15.347 [2024-11-20 15:35:49.499926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.347 [2024-11-20 15:35:49.499934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.347 [2024-11-20 15:35:49.499939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.347 [2024-11-20 15:35:49.499945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91576 len:8 PRP1 0x0 PRP2 0x0
00:26:15.347 [2024-11-20 15:35:49.499952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.347 [2024-11-20 15:35:49.499960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.347 [2024-11-20 15:35:49.499965] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.347 [2024-11-20 15:35:49.499971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91584 len:8 PRP1 0x0 PRP2 0x0
00:26:15.347 [2024-11-20 15:35:49.499978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.347 [2024-11-20 15:35:49.499986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.347 [2024-11-20 15:35:49.499992] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.347 [2024-11-20 15:35:49.499998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91592 len:8 PRP1 0x0 PRP2 0x0
00:26:15.347 [2024-11-20 15:35:49.500005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.347 [2024-11-20 15:35:49.510601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.347 [2024-11-20 15:35:49.510628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.347 [2024-11-20 15:35:49.510639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91600 len:8 PRP1 0x0 PRP2 0x0
00:26:15.347 [2024-11-20 15:35:49.510649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.347 [2024-11-20 15:35:49.510657] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.347 [2024-11-20 15:35:49.510662] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.347 [2024-11-20 15:35:49.510669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91608 len:8 PRP1 0x0 PRP2 0x0
00:26:15.347 [2024-11-20 15:35:49.510676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.347 [2024-11-20 15:35:49.510689] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.347 [2024-11-20 15:35:49.510695] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.347 [2024-11-20 15:35:49.510702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91616 len:8 PRP1 0x0 PRP2 0x0
00:26:15.347 [2024-11-20 15:35:49.510709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.347 [2024-11-20 15:35:49.510717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.347 [2024-11-20 15:35:49.510722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.347 [2024-11-20 15:35:49.510729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91624 len:8 PRP1 0x0 PRP2 0x0
00:26:15.347 [2024-11-20 15:35:49.510736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.347 [2024-11-20 15:35:49.510744] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.347 [2024-11-20 15:35:49.510750] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.347 [2024-11-20 15:35:49.510756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91632 len:8 PRP1 0x0 PRP2 0x0
00:26:15.347 [2024-11-20 15:35:49.510764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.347 [2024-11-20 15:35:49.510771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.347 [2024-11-20 15:35:49.510776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.347 [2024-11-20 15:35:49.510782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91640 len:8 PRP1 0x0 PRP2 0x0
00:26:15.347 [2024-11-20 15:35:49.510790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.347 [2024-11-20 15:35:49.510797] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.347 [2024-11-20 15:35:49.510803] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.347 [2024-11-20 15:35:49.510809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91648 len:8 PRP1 0x0 PRP2 0x0
00:26:15.347 [2024-11-20 15:35:49.510816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.347 [2024-11-20 15:35:49.510823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.347 [2024-11-20 15:35:49.510829] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.347 [2024-11-20 15:35:49.510835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91656 len:8 PRP1 0x0 PRP2 0x0
00:26:15.347 [2024-11-20 15:35:49.510842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.347 [2024-11-20 15:35:49.510850] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.347 [2024-11-20 15:35:49.510855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.347 [2024-11-20 15:35:49.510861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91664 len:8 PRP1 0x0 PRP2 0x0
00:26:15.347 [2024-11-20 15:35:49.510869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.347 [2024-11-20 15:35:49.510876] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.347 [2024-11-20 15:35:49.510881] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.347 [2024-11-20 15:35:49.510887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91672 len:8 PRP1 0x0 PRP2 0x0
00:26:15.347 [2024-11-20 15:35:49.510896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.347 [2024-11-20 15:35:49.510904] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.347 [2024-11-20 15:35:49.510910] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.347 [2024-11-20 15:35:49.510916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91680 len:8 PRP1 0x0 PRP2 0x0
00:26:15.347 [2024-11-20 15:35:49.510923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.347 [2024-11-20 15:35:49.510930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.347 [2024-11-20 15:35:49.510935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.347 [2024-11-20 15:35:49.510941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91688 len:8 PRP1 0x0 PRP2 0x0
00:26:15.347 [2024-11-20 15:35:49.510949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.347 [2024-11-20 15:35:49.510956] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.347 [2024-11-20 15:35:49.510961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.347 [2024-11-20 15:35:49.510968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91696 len:8 PRP1 0x0 PRP2 0x0
00:26:15.347 [2024-11-20 15:35:49.510975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.347 [2024-11-20 15:35:49.510982] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.347 [2024-11-20 15:35:49.510988] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.347 [2024-11-20 15:35:49.510994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91704 len:8 PRP1 0x0 PRP2 0x0
00:26:15.347 [2024-11-20 15:35:49.511001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.347 [2024-11-20 15:35:49.511008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.347 [2024-11-20 15:35:49.511014] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.347 [2024-11-20 15:35:49.511020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91712 len:8 PRP1 0x0 PRP2 0x0
00:26:15.347 [2024-11-20 15:35:49.511027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.347 [2024-11-20 15:35:49.511035] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.347 [2024-11-20 15:35:49.511040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.347 [2024-11-20 15:35:49.511046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91720 len:8 PRP1 0x0 PRP2 0x0
00:26:15.347 [2024-11-20 15:35:49.511053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.347 [2024-11-20 15:35:49.511061] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.347 [2024-11-20 15:35:49.511067] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.348 [2024-11-20 15:35:49.511073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91728 len:8 PRP1 0x0 PRP2 0x0
00:26:15.348 [2024-11-20 15:35:49.511080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.348 [2024-11-20 15:35:49.511087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.348 [2024-11-20 15:35:49.511093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.348 [2024-11-20 15:35:49.511100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91736 len:8 PRP1 0x0 PRP2 0x0
00:26:15.348 [2024-11-20 15:35:49.511108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.348 [2024-11-20 15:35:49.511115] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.348 [2024-11-20 15:35:49.511121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.348 [2024-11-20 15:35:49.511127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91744 len:8 PRP1 0x0 PRP2 0x0
00:26:15.348 [2024-11-20 15:35:49.511134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.348 [2024-11-20 15:35:49.511142] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.348 [2024-11-20 15:35:49.511147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.348 [2024-11-20 15:35:49.511153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91752 len:8 PRP1 0x0 PRP2 0x0
00:26:15.348 [2024-11-20 15:35:49.511167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.348 [2024-11-20 15:35:49.511175] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.348 [2024-11-20 15:35:49.511181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.348 [2024-11-20 15:35:49.511187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91760 len:8 PRP1 0x0 PRP2 0x0
00:26:15.348 [2024-11-20 15:35:49.511194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.348 [2024-11-20 15:35:49.511202] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.348 [2024-11-20 15:35:49.511207] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.348 [2024-11-20 15:35:49.511213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91768 len:8 PRP1 0x0 PRP2 0x0
00:26:15.348 [2024-11-20 15:35:49.511220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.348 [2024-11-20 15:35:49.511228] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.348 [2024-11-20 15:35:49.511233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.348 [2024-11-20 15:35:49.511239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91776 len:8 PRP1 0x0 PRP2 0x0
00:26:15.348 [2024-11-20 15:35:49.511246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.348 [2024-11-20 15:35:49.511254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.348 [2024-11-20 15:35:49.511259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.348 [2024-11-20 15:35:49.511265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91784 len:8 PRP1 0x0 PRP2 0x0
00:26:15.348 [2024-11-20 15:35:49.511273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.348 [2024-11-20 15:35:49.511280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.348 [2024-11-20 15:35:49.511285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.348 [2024-11-20 15:35:49.511292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91792 len:8 PRP1 0x0 PRP2 0x0
00:26:15.348 [2024-11-20 15:35:49.511299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.348 [2024-11-20 15:35:49.511308] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.348 [2024-11-20 15:35:49.511313] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.348 [2024-11-20 15:35:49.511320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91800 len:8 PRP1 0x0 PRP2 0x0
00:26:15.348 [2024-11-20 15:35:49.511327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.348 [2024-11-20 15:35:49.511334] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.348 [2024-11-20 15:35:49.511340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.348 [2024-11-20 15:35:49.511346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91808 len:8 PRP1 0x0 PRP2 0x0
00:26:15.348 [2024-11-20 15:35:49.511353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.348 [2024-11-20 15:35:49.511360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.348 [2024-11-20 15:35:49.511366] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.348 [2024-11-20 15:35:49.511372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91816 len:8 PRP1 0x0 PRP2 0x0
00:26:15.348 [2024-11-20 15:35:49.511379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.348 [2024-11-20 15:35:49.511386] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.348 [2024-11-20 15:35:49.511392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.348 [2024-11-20 15:35:49.511398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91824 len:8 PRP1 0x0 PRP2 0x0
00:26:15.348 [2024-11-20 15:35:49.511405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.348 [2024-11-20 15:35:49.511412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.348 [2024-11-20 15:35:49.511418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.348 [2024-11-20 15:35:49.511424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91832 len:8 PRP1 0x0 PRP2 0x0
00:26:15.348 [2024-11-20 15:35:49.511431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.348 [2024-11-20 15:35:49.511439] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.348 [2024-11-20 15:35:49.511444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.348 [2024-11-20 15:35:49.511450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91840 len:8 PRP1 0x0 PRP2 0x0
00:26:15.348 [2024-11-20 15:35:49.511458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.348 [2024-11-20 15:35:49.511465] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.348 [2024-11-20 15:35:49.511471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.348 [2024-11-20 15:35:49.511476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91848 len:8 PRP1 0x0 PRP2 0x0
00:26:15.348 [2024-11-20 15:35:49.511484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.348 [2024-11-20 15:35:49.511492] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.348 [2024-11-20 15:35:49.511497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.348 [2024-11-20 15:35:49.511503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91856 len:8 PRP1 0x0 PRP2 0x0
00:26:15.348 [2024-11-20 15:35:49.511511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.348 [2024-11-20 15:35:49.511519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.348 [2024-11-20 15:35:49.511524] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.348 [2024-11-20 15:35:49.511530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91864 len:8 PRP1 0x0 PRP2 0x0
00:26:15.348 [2024-11-20 15:35:49.511538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.348 [2024-11-20 15:35:49.511545] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.348 [2024-11-20 15:35:49.511551] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.348 [2024-11-20 15:35:49.511557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91872 len:8 PRP1 0x0 PRP2 0x0
00:26:15.348 [2024-11-20 15:35:49.511564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.348 [2024-11-20 15:35:49.511572] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.348 [2024-11-20 15:35:49.511577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.348 [2024-11-20 15:35:49.511583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91880 len:8 PRP1 0x0 PRP2 0x0
00:26:15.348 [2024-11-20 15:35:49.511590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.348 [2024-11-20 15:35:49.511598] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.348 [2024-11-20 15:35:49.511604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.348 [2024-11-20 15:35:49.511610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91888 len:8 PRP1 0x0 PRP2 0x0
00:26:15.348 [2024-11-20 15:35:49.511617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.348 [2024-11-20 15:35:49.511625] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.348 [2024-11-20 15:35:49.511630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.348 [2024-11-20 15:35:49.511636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91896 len:8 PRP1 0x0 PRP2 0x0
00:26:15.348 [2024-11-20 15:35:49.511644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.348 [2024-11-20 15:35:49.511651] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.348 [2024-11-20 15:35:49.511656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.348 [2024-11-20 15:35:49.511662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91904 len:8 PRP1 0x0 PRP2 0x0
00:26:15.348 [2024-11-20 15:35:49.511670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.348 [2024-11-20 15:35:49.511677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.348 [2024-11-20 15:35:49.511683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.348 [2024-11-20 15:35:49.511689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91912 len:8 PRP1 0x0 PRP2 0x0
00:26:15.346 [2024-11-20 15:35:49.511696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.349 [2024-11-20 15:35:49.511703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.349 [2024-11-20 15:35:49.511708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.349 [2024-11-20 15:35:49.511716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91920 len:8 PRP1 0x0 PRP2 0x0
00:26:15.349 [2024-11-20 15:35:49.511723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.349 [2024-11-20 15:35:49.511731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.349 [2024-11-20 15:35:49.511736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.349 [2024-11-20 15:35:49.511742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91928 len:8 PRP1 0x0 PRP2 0x0
00:26:15.349 [2024-11-20 15:35:49.511749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.349 [2024-11-20 15:35:49.511757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.349 [2024-11-20 15:35:49.511763] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.349 [2024-11-20 15:35:49.511769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91936 len:8 PRP1 0x0 PRP2 0x0
00:26:15.349 [2024-11-20 15:35:49.511776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.349 [2024-11-20 15:35:49.511783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.349 [2024-11-20 15:35:49.511789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.349 [2024-11-20 15:35:49.511795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91944 len:8 PRP1 0x0 PRP2 0x0
00:26:15.349 [2024-11-20 15:35:49.511802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.349 [2024-11-20 15:35:49.511809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.349 [2024-11-20 15:35:49.511815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.349 [2024-11-20 15:35:49.511821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91952 len:8 PRP1 0x0 PRP2 0x0
00:26:15.349 [2024-11-20 15:35:49.511828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.349 [2024-11-20 15:35:49.511836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.349 [2024-11-20 15:35:49.511841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.349 [2024-11-20 15:35:49.511847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91960 len:8 PRP1 0x0 PRP2 0x0
00:26:15.349 [2024-11-20 15:35:49.511854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.349 [2024-11-20 15:35:49.511862] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.349 [2024-11-20 15:35:49.511867] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.349 [2024-11-20 15:35:49.511874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91968 len:8 PRP1 0x0 PRP2 0x0
00:26:15.349 [2024-11-20 15:35:49.511881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.349 [2024-11-20 15:35:49.511888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.349 [2024-11-20 15:35:49.511894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.349 [2024-11-20 15:35:49.511900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91976 len:8 PRP1 0x0 PRP2 0x0
00:26:15.349 [2024-11-20 15:35:49.511907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.349 [2024-11-20 15:35:49.511914] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.349 [2024-11-20 15:35:49.511922] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.349 [2024-11-20 15:35:49.511928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91984 len:8 PRP1 0x0 PRP2 0x0
00:26:15.349 [2024-11-20 15:35:49.511935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.349 [2024-11-20 15:35:49.511942] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.349 [2024-11-20 15:35:49.511948] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.349 [2024-11-20 15:35:49.511954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91992 len:8 PRP1 0x0 PRP2 0x0
00:26:15.349 [2024-11-20 15:35:49.511961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.349 [2024-11-20 15:35:49.511968] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.349 [2024-11-20 15:35:49.511974] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.349 [2024-11-20 15:35:49.511980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92000 len:8 PRP1 0x0 PRP2 0x0
00:26:15.349 [2024-11-20 15:35:49.511987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.349 [2024-11-20 15:35:49.511995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.349 [2024-11-20 15:35:49.512000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.349 [2024-11-20 15:35:49.512006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92008 len:8 PRP1 0x0 PRP2 0x0
00:26:15.349 [2024-11-20 15:35:49.512013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.349 [2024-11-20 15:35:49.512021] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.349 [2024-11-20 15:35:49.512027] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.349 [2024-11-20 15:35:49.512033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92016 len:8 PRP1 0x0 PRP2 0x0
00:26:15.349 [2024-11-20 15:35:49.512040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.349 [2024-11-20 15:35:49.512048] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.349 [2024-11-20 15:35:49.512053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.349 [2024-11-20 15:35:49.512059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92024 len:8 PRP1 0x0 PRP2 0x0
00:26:15.349 [2024-11-20 15:35:49.512066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.349 [2024-11-20 15:35:49.512074] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.349 [2024-11-20 15:35:49.512079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.349 [2024-11-20 15:35:49.512085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92032 len:8 PRP1 0x0 PRP2 0x0
00:26:15.349 [2024-11-20 15:35:49.512092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.349 [2024-11-20 15:35:49.512100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.349 [2024-11-20 15:35:49.512105] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.349 [2024-11-20 15:35:49.512111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92040 len:8 PRP1 0x0 PRP2 0x0
00:26:15.349 [2024-11-20 15:35:49.512118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.349 [2024-11-20 15:35:49.512127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.349 [2024-11-20 15:35:49.512133] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.349 [2024-11-20 15:35:49.512139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92048 len:8 PRP1 0x0 PRP2 0x0
00:26:15.349 [2024-11-20 15:35:49.512146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.349 [2024-11-20 15:35:49.512154] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.349 [2024-11-20 15:35:49.512166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.349 [2024-11-20 15:35:49.512173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92056 len:8 PRP1 0x0 PRP2 0x0
00:26:15.349 [2024-11-20 15:35:49.512180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.349 [2024-11-20 15:35:49.512188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.349 [2024-11-20 15:35:49.512193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.349 [2024-11-20 15:35:49.512199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92064 len:8 PRP1 0x0 PRP2 0x0
00:26:15.349 [2024-11-20 15:35:49.512206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.349 [2024-11-20 15:35:49.512214] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.349 [2024-11-20 15:35:49.512220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.349 [2024-11-20 15:35:49.512226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92072 len:8 PRP1 0x0 PRP2 0x0
00:26:15.349 [2024-11-20 15:35:49.512233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.349 [2024-11-20 15:35:49.512241] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.349 [2024-11-20 15:35:49.512246] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.349 [2024-11-20 15:35:49.512252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92080 len:8 PRP1 0x0 PRP2 0x0
00:26:15.349 [2024-11-20 15:35:49.512259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.349 [2024-11-20 15:35:49.512267] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.349 [2024-11-20 15:35:49.512272] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.349 [2024-11-20 15:35:49.512278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92088 len:8 PRP1 0x0 PRP2 0x0
00:26:15.349 [2024-11-20 15:35:49.512285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.349 [2024-11-20 15:35:49.512293] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.349 [2024-11-20 15:35:49.512298] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.349 [2024-11-20 15:35:49.512304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92096 len:8 PRP1 0x0 PRP2 0x0
00:26:15.349 [2024-11-20 15:35:49.512311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.349 [2024-11-20 15:35:49.512319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.349 [2024-11-20 15:35:49.512324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.350 [2024-11-20 15:35:49.512330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92104 len:8 PRP1 0x0 PRP2 0x0
00:26:15.350 [2024-11-20 15:35:49.520048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.350 [2024-11-20 15:35:49.520084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.350 [2024-11-20 15:35:49.520094] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.350 [2024-11-20 15:35:49.520103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92112 len:8 PRP1 0x0 PRP2 0x0
00:26:15.350 [2024-11-20 15:35:49.520113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.350 [2024-11-20 15:35:49.520123] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.350 [2024-11-20 15:35:49.520131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.350 [2024-11-20 15:35:49.520139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92120 len:8 PRP1 0x0 PRP2 0x0
00:26:15.350 [2024-11-20 15:35:49.520148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.350 [2024-11-20 15:35:49.520166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.350 [2024-11-20 15:35:49.520175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.350 [2024-11-20 15:35:49.520183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92128 len:8 PRP1 0x0 PRP2 0x0
00:26:15.350 [2024-11-20 15:35:49.520192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.350 [2024-11-20 15:35:49.520202] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.350 [2024-11-20 15:35:49.520209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.350 [2024-11-20 15:35:49.520216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92136 len:8 PRP1 0x0 PRP2 0x0
00:26:15.350 [2024-11-20 15:35:49.520226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.350 [2024-11-20 15:35:49.520235] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.350 [2024-11-20 15:35:49.520242] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.350 [2024-11-20 15:35:49.520250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92144 len:8 PRP1 0x0 PRP2 0x0
00:26:15.350 [2024-11-20 15:35:49.520259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.350 [2024-11-20 15:35:49.520268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.350 [2024-11-20 15:35:49.520275] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.350 [2024-11-20 15:35:49.520283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92152 len:8 PRP1 0x0 PRP2 0x0
00:26:15.350 [2024-11-20 15:35:49.520292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.350 [2024-11-20 15:35:49.520301] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.350 [2024-11-20 15:35:49.520308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.350 [2024-11-20 15:35:49.520316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92160 len:8 PRP1 0x0 PRP2 0x0
00:26:15.350 [2024-11-20 15:35:49.520325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.350 [2024-11-20 15:35:49.520335] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.350 [2024-11-20 15:35:49.520342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.350 [2024-11-20 15:35:49.520354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92168 len:8 PRP1 0x0 PRP2 0x0
00:26:15.350 [2024-11-20 15:35:49.520363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.350 [2024-11-20 15:35:49.520373] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.350 [2024-11-20 15:35:49.520380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.350 [2024-11-20 15:35:49.520388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92176 len:8 PRP1 0x0 PRP2 0x0
00:26:15.350 [2024-11-20 15:35:49.520397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.350 [2024-11-20 15:35:49.520406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:15.350 [2024-11-20 15:35:49.520413] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:15.350 [2024-11-20 15:35:49.520421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:92184 len:8 PRP1 0x0 PRP2 0x0 00:26:15.350 [2024-11-20 15:35:49.520430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.350 [2024-11-20 15:35:49.520439] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.350 [2024-11-20 15:35:49.520446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.350 [2024-11-20 15:35:49.520453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92192 len:8 PRP1 0x0 PRP2 0x0 00:26:15.350 [2024-11-20 15:35:49.520463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.350 [2024-11-20 15:35:49.520472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.350 [2024-11-20 15:35:49.520479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.350 [2024-11-20 15:35:49.520487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92200 len:8 PRP1 0x0 PRP2 0x0 00:26:15.350 [2024-11-20 15:35:49.520496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.350 [2024-11-20 15:35:49.520506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.350 [2024-11-20 15:35:49.520512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.350 [2024-11-20 15:35:49.520520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92208 len:8 PRP1 0x0 PRP2 0x0 00:26:15.350 [2024-11-20 15:35:49.520529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.350 [2024-11-20 15:35:49.520538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.350 [2024-11-20 15:35:49.520545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.350 [2024-11-20 15:35:49.520553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92216 len:8 PRP1 0x0 PRP2 0x0 00:26:15.350 [2024-11-20 15:35:49.520562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.350 [2024-11-20 15:35:49.520571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.350 [2024-11-20 15:35:49.520578] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.350 [2024-11-20 15:35:49.520586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91368 len:8 PRP1 0x0 PRP2 0x0 00:26:15.350 [2024-11-20 15:35:49.520595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.350 [2024-11-20 15:35:49.520606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.350 [2024-11-20 15:35:49.520614] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.350 [2024-11-20 15:35:49.520621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91376 len:8 PRP1 0x0 PRP2 0x0 
00:26:15.350 [2024-11-20 15:35:49.520630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.350 [2024-11-20 15:35:49.520640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.350 [2024-11-20 15:35:49.520647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.350 [2024-11-20 15:35:49.520655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91384 len:8 PRP1 0x0 PRP2 0x0 00:26:15.350 [2024-11-20 15:35:49.520664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.350 [2024-11-20 15:35:49.520673] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.350 [2024-11-20 15:35:49.520680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.350 [2024-11-20 15:35:49.520688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91392 len:8 PRP1 0x0 PRP2 0x0 00:26:15.350 [2024-11-20 15:35:49.520697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.350 [2024-11-20 15:35:49.520706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.350 [2024-11-20 15:35:49.520713] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.350 [2024-11-20 15:35:49.520721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91400 len:8 PRP1 0x0 PRP2 0x0 00:26:15.350 [2024-11-20 15:35:49.520730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.350 [2024-11-20 15:35:49.520740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.350 [2024-11-20 15:35:49.520746] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.350 [2024-11-20 15:35:49.520754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91408 len:8 PRP1 0x0 PRP2 0x0 00:26:15.350 [2024-11-20 15:35:49.520763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.350 [2024-11-20 15:35:49.520773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.350 [2024-11-20 15:35:49.520779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.350 [2024-11-20 15:35:49.520787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91416 len:8 PRP1 0x0 PRP2 0x0 00:26:15.350 [2024-11-20 15:35:49.520796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.350 [2024-11-20 15:35:49.520806] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.350 [2024-11-20 15:35:49.520812] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.350 [2024-11-20 15:35:49.520820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92224 len:8 PRP1 0x0 PRP2 0x0 00:26:15.350 [2024-11-20 15:35:49.520829] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.350 [2024-11-20 15:35:49.520839] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.350 [2024-11-20 15:35:49.520846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.350 [2024-11-20 15:35:49.520853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92232 len:8 PRP1 0x0 PRP2 0x0 00:26:15.351 [2024-11-20 15:35:49.520864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.351 [2024-11-20 15:35:49.520874] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.351 [2024-11-20 15:35:49.520881] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.351 [2024-11-20 15:35:49.520889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92240 len:8 PRP1 0x0 PRP2 0x0 00:26:15.351 [2024-11-20 15:35:49.520898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.351 [2024-11-20 15:35:49.520907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.351 [2024-11-20 15:35:49.520914] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.351 [2024-11-20 15:35:49.520922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92248 len:8 PRP1 0x0 PRP2 0x0 00:26:15.351 [2024-11-20 15:35:49.520931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.351 [2024-11-20 15:35:49.520941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.351 [2024-11-20 15:35:49.520947] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.351 [2024-11-20 15:35:49.520955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92256 len:8 PRP1 0x0 PRP2 0x0 00:26:15.351 [2024-11-20 15:35:49.520964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.351 [2024-11-20 15:35:49.520973] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.351 [2024-11-20 15:35:49.520981] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.351 [2024-11-20 15:35:49.520988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92264 len:8 PRP1 0x0 PRP2 0x0 00:26:15.351 [2024-11-20 15:35:49.520997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.351 [2024-11-20 15:35:49.521006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.351 [2024-11-20 15:35:49.521013] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.351 [2024-11-20 15:35:49.521021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92272 len:8 PRP1 0x0 PRP2 0x0 00:26:15.351 [2024-11-20 15:35:49.521030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.351 [2024-11-20 15:35:49.521039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.351 [2024-11-20 15:35:49.521046] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.351 [2024-11-20 15:35:49.521054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92280 len:8 PRP1 0x0 PRP2 0x0 00:26:15.351 [2024-11-20 15:35:49.521063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.351 [2024-11-20 15:35:49.521072] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.351 [2024-11-20 15:35:49.521079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.351 [2024-11-20 15:35:49.521087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92288 len:8 PRP1 0x0 PRP2 0x0 00:26:15.351 [2024-11-20 15:35:49.521095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.351 [2024-11-20 15:35:49.521105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.351 [2024-11-20 15:35:49.521112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.351 [2024-11-20 15:35:49.521121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92296 len:8 PRP1 0x0 PRP2 0x0 00:26:15.351 [2024-11-20 15:35:49.521130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.351 [2024-11-20 15:35:49.521140] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.351 [2024-11-20 15:35:49.521147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.351 [2024-11-20 15:35:49.521155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92304 len:8 PRP1 0x0 PRP2 0x0 00:26:15.351 [2024-11-20 15:35:49.521172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.351 [2024-11-20 15:35:49.521182] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.351 [2024-11-20 15:35:49.521188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.351 [2024-11-20 15:35:49.521196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92312 len:8 PRP1 0x0 PRP2 0x0 00:26:15.351 [2024-11-20 15:35:49.521205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.351 [2024-11-20 15:35:49.521215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.351 [2024-11-20 15:35:49.521222] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.351 [2024-11-20 15:35:49.521229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92320 len:8 PRP1 0x0 PRP2 0x0 00:26:15.351 [2024-11-20 15:35:49.521238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:15.351 [2024-11-20 15:35:49.521248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.351 [2024-11-20 15:35:49.521256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.351 [2024-11-20 15:35:49.521264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92328 len:8 PRP1 0x0 PRP2 0x0 00:26:15.351 [2024-11-20 15:35:49.521273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.351 [2024-11-20 15:35:49.521282] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.351 [2024-11-20 15:35:49.521289] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.351 [2024-11-20 15:35:49.521297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92336 len:8 PRP1 0x0 PRP2 0x0 00:26:15.351 [2024-11-20 15:35:49.521306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.351 [2024-11-20 15:35:49.521316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.351 [2024-11-20 15:35:49.521322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.351 [2024-11-20 15:35:49.521330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92344 len:8 PRP1 0x0 PRP2 0x0 00:26:15.351 [2024-11-20 15:35:49.521339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.351 [2024-11-20 15:35:49.521349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.351 [2024-11-20 15:35:49.521356] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.351 [2024-11-20 15:35:49.521363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92352 len:8 PRP1 0x0 PRP2 0x0 00:26:15.351 [2024-11-20 15:35:49.521372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.351 [2024-11-20 15:35:49.521382] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.351 [2024-11-20 15:35:49.521390] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.351 [2024-11-20 15:35:49.521398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92360 len:8 PRP1 0x0 PRP2 0x0 00:26:15.351 [2024-11-20 15:35:49.521407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.351 [2024-11-20 15:35:49.521455] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:15.351 [2024-11-20 15:35:49.521468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:26:15.351 [2024-11-20 15:35:49.521540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1585d70 (9): Bad file descriptor
00:26:15.351 [2024-11-20 15:35:49.526032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:26:15.351 [2024-11-20 15:35:49.552032] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:26:15.351 10684.50 IOPS, 41.74 MiB/s [2024-11-20T14:36:04.311Z] 10999.00 IOPS, 42.96 MiB/s [2024-11-20T14:36:04.311Z] 11260.25 IOPS, 43.99 MiB/s [2024-11-20T14:36:04.311Z]
00:26:15.351 [2024-11-20 15:35:53.078360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:43504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.351 [2024-11-20 15:35:53.078390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair repeats for READ lba:43512 through lba:44080 (step 8, cid varies, SGL TRANSPORT DATA BLOCK), then for WRITE lba:44088 through lba:44448 (step 8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) ...]
00:26:15.354 [2024-11-20 15:35:53.079789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:44456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-20 15:35:53.079794] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.354 [2024-11-20 15:35:53.079800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:44464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.354 [2024-11-20 15:35:53.079805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.354 [2024-11-20 15:35:53.079822] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.354 [2024-11-20 15:35:53.079827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44472 len:8 PRP1 0x0 PRP2 0x0 00:26:15.354 [2024-11-20 15:35:53.079833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.354 [2024-11-20 15:35:53.079842] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.354 [2024-11-20 15:35:53.079846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.355 [2024-11-20 15:35:53.079850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44480 len:8 PRP1 0x0 PRP2 0x0 00:26:15.355 [2024-11-20 15:35:53.079855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.355 [2024-11-20 15:35:53.079860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.355 [2024-11-20 15:35:53.079864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.355 [2024-11-20 15:35:53.079868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44488 len:8 PRP1 0x0 PRP2 0x0 00:26:15.355 [2024-11-20 15:35:53.079873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.355 [2024-11-20 15:35:53.079878] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.355 [2024-11-20 15:35:53.079883] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.355 [2024-11-20 15:35:53.079887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44496 len:8 PRP1 0x0 PRP2 0x0 00:26:15.355 [2024-11-20 15:35:53.079891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.355 [2024-11-20 15:35:53.079898] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.355 [2024-11-20 15:35:53.079902] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.355 [2024-11-20 15:35:53.079906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44504 len:8 PRP1 0x0 PRP2 0x0 00:26:15.355 [2024-11-20 15:35:53.079911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.355 [2024-11-20 15:35:53.079916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.355 [2024-11-20 15:35:53.079920] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.355 [2024-11-20 
15:35:53.079924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44512 len:8 PRP1 0x0 PRP2 0x0 00:26:15.355 [2024-11-20 15:35:53.079929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.355 [2024-11-20 15:35:53.079935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.355 [2024-11-20 15:35:53.079938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.355 [2024-11-20 15:35:53.079943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44520 len:8 PRP1 0x0 PRP2 0x0 00:26:15.355 [2024-11-20 15:35:53.079948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.355 [2024-11-20 15:35:53.079981] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:26:15.355 [2024-11-20 15:35:53.079998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:15.355 [2024-11-20 15:35:53.080004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.355 [2024-11-20 15:35:53.080010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:15.355 [2024-11-20 15:35:53.080015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.355 [2024-11-20 15:35:53.080020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:15.355 [2024-11-20 15:35:53.080027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.355 [2024-11-20 15:35:53.080033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:15.355 [2024-11-20 15:35:53.080038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.355 [2024-11-20 15:35:53.080043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:26:15.355 [2024-11-20 15:35:53.092427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1585d70 (9): Bad file descriptor 00:26:15.355 [2024-11-20 15:35:53.095770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:15.355 [2024-11-20 15:35:53.121826] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
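A note on the status pair that every completion above repeats: (00/08) is status code type 0x0 (generic command status) with status code 0x08, "Command Aborted due to SQ Deletion", the expected completion status when the initiator deletes the I/O submission queue during a path switch. A hedged way to condense a burst like this from a saved copy of the console output (the bdevperf.log file name is an assumption; this job does not necessarily write one):

    # Count the aborted completions in a saved log (file name is hypothetical).
    grep -c 'ABORTED - SQ DELETION (00/08)' bdevperf.log

    # Break the aborted I/O commands down by opcode (READ vs WRITE).
    grep -oE '(READ|WRITE) sqid:[0-9]+' bdevperf.log | sort | uniq -c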
00:26:15.355 11475.80 IOPS, 44.83 MiB/s [2024-11-20T14:36:04.315Z] 11705.83 IOPS, 45.73 MiB/s [2024-11-20T14:36:04.315Z] 11892.86 IOPS, 46.46 MiB/s [2024-11-20T14:36:04.315Z] 12025.88 IOPS, 46.98 MiB/s [2024-11-20T14:36:04.315Z] 12136.33 IOPS, 47.41 MiB/s [2024-11-20T14:36:04.315Z] [2024-11-20 15:35:57.455087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:110424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.355 [2024-11-20 15:35:57.455115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.355 [2024-11-20 15:35:57.455128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:110432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.355 [2024-11-20 15:35:57.455134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.355 [2024-11-20 15:35:57.455141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:110440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.355 [2024-11-20 15:35:57.455146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.355 [2024-11-20 15:35:57.455153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:110448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.355 [2024-11-20 15:35:57.455164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.355 [2024-11-20 15:35:57.455171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:110456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.355 [2024-11-20 15:35:57.455177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.355 [2024-11-20 15:35:57.455183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:110464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.355 [2024-11-20 15:35:57.455188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.355 [2024-11-20 15:35:57.455195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:110472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.355 [2024-11-20 15:35:57.455200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.355 [2024-11-20 15:35:57.455207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:110480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.355 [2024-11-20 15:35:57.455212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.355 [2024-11-20 15:35:57.455219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:110488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.355 [2024-11-20 15:35:57.455224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.355 [2024-11-20 15:35:57.455235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:61 nsid:1 lba:110496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.355 [2024-11-20 15:35:57.455240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.355 [2024-11-20 15:35:57.455247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:110504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.355 [2024-11-20 15:35:57.455252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.355 [2024-11-20 15:35:57.455259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:110512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.355 [2024-11-20 15:35:57.455264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.355 [2024-11-20 15:35:57.455271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:110520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.355 [2024-11-20 15:35:57.455276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.355 [2024-11-20 15:35:57.455283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:110528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.355 [2024-11-20 15:35:57.455288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.355 [2024-11-20 15:35:57.455294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:110536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.355 [2024-11-20 15:35:57.455300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.355 [2024-11-20 15:35:57.455306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:110544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.355 [2024-11-20 15:35:57.455311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.355 [2024-11-20 15:35:57.455318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:110552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.355 [2024-11-20 15:35:57.455323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.355 [2024-11-20 15:35:57.455330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:110560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.355 [2024-11-20 15:35:57.455335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.355 [2024-11-20 15:35:57.455342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:110568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.355 [2024-11-20 15:35:57.455347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.355 [2024-11-20 15:35:57.455354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:110576 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.355 [2024-11-20 15:35:57.455359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.355 [2024-11-20 15:35:57.455365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:110584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.355 [2024-11-20 15:35:57.455370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.355 [2024-11-20 15:35:57.455377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:110592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.355 [2024-11-20 15:35:57.455383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.355 [2024-11-20 15:35:57.455390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:110600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.355 [2024-11-20 15:35:57.455395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.356 [2024-11-20 15:35:57.455401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:110608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.356 [2024-11-20 15:35:57.455407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.356 [2024-11-20 15:35:57.455413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:110616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.356 [2024-11-20 15:35:57.455418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.356 [2024-11-20 15:35:57.455424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:110624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.356 [2024-11-20 15:35:57.455429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.356 [2024-11-20 15:35:57.455436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:110632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.356 [2024-11-20 15:35:57.455441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.356 [2024-11-20 15:35:57.455448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.356 [2024-11-20 15:35:57.455453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.356 [2024-11-20 15:35:57.455459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:110648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.356 [2024-11-20 15:35:57.455464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.356 [2024-11-20 15:35:57.455471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:110712 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:15.356 [2024-11-20 15:35:57.455476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.356 [2024-11-20 15:35:57.455482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:110720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.356 [2024-11-20 15:35:57.455487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.356 [2024-11-20 15:35:57.455493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:110728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.356 [2024-11-20 15:35:57.455498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.356 [2024-11-20 15:35:57.455505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:110736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.356 [2024-11-20 15:35:57.455511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.356 [2024-11-20 15:35:57.455517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:110744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.356 [2024-11-20 15:35:57.455522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.356 [2024-11-20 15:35:57.455530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:110752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.356 [2024-11-20 15:35:57.455535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.356 [2024-11-20 15:35:57.455542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:110760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.356 [2024-11-20 15:35:57.455547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.356 [2024-11-20 15:35:57.455553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:110768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.356 [2024-11-20 15:35:57.455558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.356 [2024-11-20 15:35:57.455565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:110776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.356 [2024-11-20 15:35:57.455569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.356 [2024-11-20 15:35:57.455576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:110784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.356 [2024-11-20 15:35:57.455582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.356 [2024-11-20 15:35:57.455588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:110792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.356 [2024-11-20 
15:35:57.455593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.356 [2024-11-20 15:35:57.455600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:110800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.356 [2024-11-20 15:35:57.455605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.356 [2024-11-20 15:35:57.455612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.356 [2024-11-20 15:35:57.455616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.356 [2024-11-20 15:35:57.455623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:110816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.356 [2024-11-20 15:35:57.455628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.356 [2024-11-20 15:35:57.455634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:110824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.356 [2024-11-20 15:35:57.455639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.356 [2024-11-20 15:35:57.455646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:110832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.356 [2024-11-20 15:35:57.455651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.356 [2024-11-20 15:35:57.455657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:110840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.356 [2024-11-20 15:35:57.455662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.356 [2024-11-20 15:35:57.455668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:110848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.356 [2024-11-20 15:35:57.455674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.356 [2024-11-20 15:35:57.455681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:110856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.356 [2024-11-20 15:35:57.455686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.356 [2024-11-20 15:35:57.455692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:110864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.356 [2024-11-20 15:35:57.455697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.356 [2024-11-20 15:35:57.455704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:110872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.356 [2024-11-20 15:35:57.455709] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.356 [2024-11-20 15:35:57.455715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:110880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.356 [2024-11-20 15:35:57.455720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.356 [2024-11-20 15:35:57.455727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:110888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.356 [2024-11-20 15:35:57.455732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.356 [2024-11-20 15:35:57.455738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:110896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.356 [2024-11-20 15:35:57.455743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.356 [2024-11-20 15:35:57.455750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:110904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.356 [2024-11-20 15:35:57.455755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.356 [2024-11-20 15:35:57.455761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:110912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.356 [2024-11-20 15:35:57.455766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.356 [2024-11-20 15:35:57.455773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:110920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.356 [2024-11-20 15:35:57.455778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.356 [2024-11-20 15:35:57.455784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.356 [2024-11-20 15:35:57.455789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.356 [2024-11-20 15:35:57.455795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:110936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.356 [2024-11-20 15:35:57.455800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.356 [2024-11-20 15:35:57.455807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:110944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.455812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.357 [2024-11-20 15:35:57.455823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:110952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.455828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.357 [2024-11-20 15:35:57.455834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:110960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.455839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.357 [2024-11-20 15:35:57.455845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:110968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.455850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.357 [2024-11-20 15:35:57.455857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:110976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.455862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.357 [2024-11-20 15:35:57.455868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:110984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.455873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.357 [2024-11-20 15:35:57.455880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:110992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.455885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.357 [2024-11-20 15:35:57.455891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:111000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.455896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.357 [2024-11-20 15:35:57.455903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:111008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.455908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.357 [2024-11-20 15:35:57.455914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:111016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.455919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.357 [2024-11-20 15:35:57.455926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:111024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.455931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.357 [2024-11-20 15:35:57.455938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:111032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.455943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.357 [2024-11-20 15:35:57.455949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:111040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.455954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.357 [2024-11-20 15:35:57.455960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:111048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.455967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.357 [2024-11-20 15:35:57.455973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:111056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.455978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.357 [2024-11-20 15:35:57.455984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:111064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.455989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.357 [2024-11-20 15:35:57.455995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:111072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.456000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.357 [2024-11-20 15:35:57.456006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:111080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.456011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.357 [2024-11-20 15:35:57.456018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:111088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.456023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.357 [2024-11-20 15:35:57.456029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:111096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.456034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.357 [2024-11-20 15:35:57.456041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:111104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.456045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.357 [2024-11-20 15:35:57.456052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:111112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.456056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:15.357 [2024-11-20 15:35:57.456063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:111120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.456068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.357 [2024-11-20 15:35:57.456074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:111128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.456079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.357 [2024-11-20 15:35:57.456085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:111136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.456090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.357 [2024-11-20 15:35:57.456096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:111144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.456101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.357 [2024-11-20 15:35:57.456108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:111152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.456114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.357 [2024-11-20 15:35:57.456120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:111160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.456125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.357 [2024-11-20 15:35:57.456132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:111168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.456136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.357 [2024-11-20 15:35:57.456143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:111176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.456149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.357 [2024-11-20 15:35:57.456155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:111184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.456164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.357 [2024-11-20 15:35:57.456171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:111192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.456176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.357 [2024-11-20 15:35:57.456182] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:111200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.456187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.357 [2024-11-20 15:35:57.456194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:111208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.456198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.357 [2024-11-20 15:35:57.456205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.456210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.357 [2024-11-20 15:35:57.456216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:111224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.456221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.357 [2024-11-20 15:35:57.456228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:111232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.456232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.357 [2024-11-20 15:35:57.456239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:111240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.456243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.357 [2024-11-20 15:35:57.456250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:111248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.456255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.357 [2024-11-20 15:35:57.456263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:111256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.357 [2024-11-20 15:35:57.456268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.358 [2024-11-20 15:35:57.456274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:111264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-20 15:35:57.456280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.358 [2024-11-20 15:35:57.456286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:111272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-20 15:35:57.456291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.358 [2024-11-20 15:35:57.456298] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:111280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-20 15:35:57.456303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.358 [2024-11-20 15:35:57.456309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:111288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-20 15:35:57.456314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.358 [2024-11-20 15:35:57.456320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:111296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-20 15:35:57.456325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.358 [2024-11-20 15:35:57.456332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:111304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-20 15:35:57.456337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.358 [2024-11-20 15:35:57.456343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:111312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-20 15:35:57.456348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.358 [2024-11-20 15:35:57.456354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:111320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-20 15:35:57.456359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.358 [2024-11-20 15:35:57.456365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:111328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-20 15:35:57.456370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.358 [2024-11-20 15:35:57.456377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:111336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-20 15:35:57.456382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.358 [2024-11-20 15:35:57.456388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:111344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-20 15:35:57.456394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.358 [2024-11-20 15:35:57.456400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:111352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-20 15:35:57.456407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.358 [2024-11-20 15:35:57.456413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:11 nsid:1 lba:111360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-20 15:35:57.456419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.358 [2024-11-20 15:35:57.456425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:111368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-20 15:35:57.456430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.358 [2024-11-20 15:35:57.456437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:111376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-20 15:35:57.456442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.358 [2024-11-20 15:35:57.456448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:111384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-20 15:35:57.456453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.358 [2024-11-20 15:35:57.456459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:111392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-20 15:35:57.456464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.358 [2024-11-20 15:35:57.456470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-20 15:35:57.456475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.358 [2024-11-20 15:35:57.456482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:111408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-20 15:35:57.456487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.358 [2024-11-20 15:35:57.456493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:111416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-20 15:35:57.456499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.358 [2024-11-20 15:35:57.456505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:111424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-20 15:35:57.456510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.358 [2024-11-20 15:35:57.456516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:111432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-20 15:35:57.456521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.358 [2024-11-20 15:35:57.456527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:111440 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.358 [2024-11-20 15:35:57.456532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.358 [2024-11-20 15:35:57.456538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:110656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.358 [2024-11-20 15:35:57.456543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.358 [2024-11-20 15:35:57.456551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:110664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.358 [2024-11-20 15:35:57.456556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.358 [2024-11-20 15:35:57.456562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.358 [2024-11-20 15:35:57.456567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.358 [2024-11-20 15:35:57.456573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:110680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.358 [2024-11-20 15:35:57.456578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.358 [2024-11-20 15:35:57.456585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:110688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.358 [2024-11-20 15:35:57.456590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.358 [2024-11-20 15:35:57.456597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:110696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.358 [2024-11-20 15:35:57.456602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.358 [2024-11-20 15:35:57.456618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:15.358 [2024-11-20 15:35:57.456623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:15.358 [2024-11-20 15:35:57.456627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110704 len:8 PRP1 0x0 PRP2 0x0 00:26:15.358 [2024-11-20 15:35:57.456633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.358 [2024-11-20 15:35:57.456671] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:26:15.358 [2024-11-20 15:35:57.456688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:15.358 [2024-11-20 15:35:57.456694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.358 [2024-11-20 15:35:57.456701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:15.358 [2024-11-20 15:35:57.456705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.358 [2024-11-20 15:35:57.456712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:15.358 [2024-11-20 15:35:57.456717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.358 [2024-11-20 15:35:57.456722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:15.358 [2024-11-20 15:35:57.456727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:15.358 [2024-11-20 15:35:57.456733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:26:15.358 [2024-11-20 15:35:57.459196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:26:15.358 [2024-11-20 15:35:57.459219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1585d70 (9): Bad file descriptor
00:26:15.358 [2024-11-20 15:35:57.481627] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:26:15.358 12191.50 IOPS, 47.62 MiB/s [2024-11-20T14:36:04.318Z] 12259.91 IOPS, 47.89 MiB/s [2024-11-20T14:36:04.318Z] 12299.00 IOPS, 48.04 MiB/s [2024-11-20T14:36:04.318Z] 12359.69 IOPS, 48.28 MiB/s [2024-11-20T14:36:04.318Z] 12403.43 IOPS, 48.45 MiB/s
00:26:15.358 Latency(us)
00:26:15.358 [2024-11-20T14:36:04.318Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:15.358 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:15.358 Verification LBA range: start 0x0 length 0x4000
00:26:15.358 NVMe0n1 : 15.01 12449.38 48.63 259.43 0.00 10050.19 532.48 30583.47
00:26:15.358 [2024-11-20T14:36:04.318Z] ===================================================================================================================
00:26:15.359 [2024-11-20T14:36:04.319Z] Total : 12449.38 48.63 259.43 0.00 10050.19 532.48 30583.47
00:26:15.359 Received shutdown signal, test time was about 15.000000 seconds
00:26:15.359
00:26:15.359 Latency(us)
00:26:15.359 [2024-11-20T14:36:04.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:15.359 [2024-11-20T14:36:04.319Z] ===================================================================================================================
00:26:15.359 [2024-11-20T14:36:04.319Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:15.359 15:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:26:15.359 15:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:26:15.359 15:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:26:15.359 15:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=718627
00:26:15.359 15:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 718627 /var/tmp/bdevperf.sock
00:26:15.359 15:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:26:15.359 15:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 718627 ']' 00:26:15.359 15:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:15.359 15:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:15.359 15:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:15.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:15.359 15:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:15.359 15:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:15.620 15:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:15.620 15:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:15.620 15:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:15.880 [2024-11-20 15:36:04.657008] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:15.880 15:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:16.141 [2024-11-20 15:36:04.841446] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:16.141 15:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:16.403 NVMe0n1 00:26:16.403 15:36:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:16.975 00:26:16.975 15:36:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:17.235 00:26:17.235 15:36:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:17.235 15:36:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:26:17.494 15:36:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:17.494 15:36:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:26:20.789 15:36:09 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:20.789 15:36:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:26:20.789 15:36:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:26:20.789 15:36:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=720114
00:26:20.789 15:36:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 720114
00:26:22.170 {
00:26:22.170   "results": [
00:26:22.170     {
00:26:22.170       "job": "NVMe0n1",
00:26:22.170       "core_mask": "0x1",
00:26:22.170       "workload": "verify",
00:26:22.170       "status": "finished",
00:26:22.170       "verify_range": {
00:26:22.170         "start": 0,
00:26:22.170         "length": 16384
00:26:22.170       },
00:26:22.170       "queue_depth": 128,
00:26:22.170       "io_size": 4096,
00:26:22.170       "runtime": 1.010939,
00:26:22.170       "iops": 12826.688850662602,
00:26:22.170       "mibps": 50.10425332290079,
00:26:22.170       "io_failed": 0,
00:26:22.170       "io_timeout": 0,
00:26:22.170       "avg_latency_us": 9938.912254183699,
00:26:22.170       "min_latency_us": 2007.04,
00:26:22.170       "max_latency_us": 13926.4
00:26:22.170     }
00:26:22.170   ],
00:26:22.170   "core_count": 1
00:26:22.170 }
00:26:22.170 15:36:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:22.170 [2024-11-20 15:36:03.704307] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization...
00:26:22.170 [2024-11-20 15:36:03.704367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid718627 ]
00:26:22.170 [2024-11-20 15:36:03.787707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:22.170 [2024-11-20 15:36:03.816566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:22.170 [2024-11-20 15:36:06.402850] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:26:22.170 [2024-11-20 15:36:06.402887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:22.170 [2024-11-20 15:36:06.402896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:22.170 [2024-11-20 15:36:06.402903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:22.170 [2024-11-20 15:36:06.402909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:22.170 [2024-11-20 15:36:06.402914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:22.170 [2024-11-20 15:36:06.402920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:22.170 [2024-11-20 15:36:06.402925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:22.170 [2024-11-20 15:36:06.402930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:22.170 [2024-11-20 15:36:06.402936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
00:26:22.170 [2024-11-20 15:36:06.402955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
00:26:22.170 [2024-11-20 15:36:06.402966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe12d70 (9): Bad file descriptor
00:26:22.170 [2024-11-20 15:36:06.408178] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:26:22.170 Running I/O for 1 seconds...
00:26:22.170 12778.00 IOPS, 49.91 MiB/s
00:26:22.170
00:26:22.170 Latency(us)
00:26:22.170 [2024-11-20T14:36:11.130Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:22.170 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:22.170 Verification LBA range: start 0x0 length 0x4000
00:26:22.170 NVMe0n1 : 1.01 12826.69 50.10 0.00 0.00 9938.91 2007.04 13926.40
00:26:22.170 [2024-11-20T14:36:11.130Z] ===================================================================================================================
00:26:22.170 [2024-11-20T14:36:11.130Z] Total : 12826.69 50.10 0.00 0.00 9938.91 2007.04 13926.40
00:26:22.170 15:36:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:22.170 15:36:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:26:22.170 15:36:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:22.170 15:36:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:22.170 15:36:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:26:22.431 15:36:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:22.691 15:36:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:26:25.991 15:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:25.991 15:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:26:25.991 15:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 718627
00:26:25.991 15:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 718627 ']'
00:26:25.991 15:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 718627
00:26:25.991 15:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:26:25.991 15:36:14 nvmf_tcp.nvmf_host.nvmf_failover --
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:25.991 15:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 718627 00:26:25.991 15:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:25.991 15:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:25.991 15:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 718627' 00:26:25.991 killing process with pid 718627 00:26:25.991 15:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 718627 00:26:25.991 15:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 718627 00:26:25.991 15:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:26:25.991 15:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:26.252 15:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:26:26.252 15:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:26.252 15:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:26:26.252 15:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:26.252 15:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:26:26.252 15:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:26.252 15:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:26:26.252 15:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:26.252 15:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:26.252 rmmod nvme_tcp 00:26:26.252 rmmod nvme_fabrics 00:26:26.252 rmmod nvme_keyring 00:26:26.252 15:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:26.252 15:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:26:26.252 15:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:26:26.252 15:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 714802 ']' 00:26:26.252 15:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 714802 00:26:26.252 15:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 714802 ']' 00:26:26.252 15:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 714802 00:26:26.252 15:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:26.252 15:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:26.252 15:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 714802 00:26:26.252 15:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:26.252 15:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:26.252 15:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 714802' 
00:26:26.252 killing process with pid 714802 00:26:26.252 15:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 714802 00:26:26.252 15:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 714802 00:26:26.513 15:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:26.513 15:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:26.513 15:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:26.513 15:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:26:26.513 15:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:26:26.513 15:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:26.513 15:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:26:26.513 15:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:26.513 15:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:26.513 15:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:26.513 15:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:26.513 15:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:28.427 15:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:28.427 00:26:28.427 real 0m40.640s 00:26:28.427 user 2m5.053s 00:26:28.427 sys 0m8.801s 00:26:28.427 15:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:28.427 15:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:28.427 ************************************ 00:26:28.427 END TEST nvmf_failover 00:26:28.427 ************************************ 00:26:28.687 15:36:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:28.687 15:36:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:28.687 15:36:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:28.687 15:36:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.687 ************************************ 00:26:28.687 START TEST nvmf_host_discovery 00:26:28.687 ************************************ 00:26:28.687 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:28.687 * Looking for test storage... 
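For orientation before the discovery output continues: the nvmf_failover run that just ended above reduces to a short RPC sequence against the bdevperf application. A condensed sketch, using the exact paths, ports, and NQN from the trace (the waitforlisten/grep plumbing is omitted, and the $RPC variable and per-port loop are editorial shorthand, not lines from the script):

    # Start bdevperf in wait-for-RPC mode, as at host/failover.sh@72.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # editorial shorthand

    # Give the subsystem two extra portals so the initiator has alternate paths.
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

    # Attach all three paths to one controller in failover mode (host/failover.sh@78-80,
    # condensed here into a loop).
    for port in 4420 4421 4422; do
        $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
            -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    done

    # Drop the active path; bdev_nvme fails over to the next portal (host/failover.sh@84).
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

Earlier, at host/failover.sh@65-67, the test checked that try.txt contained exactly three 'Resetting controller successful' lines, one per failover it provoked.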
00:26:28.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:28.687 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:28.687 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:26:28.687 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:28.687 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:28.687 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:28.687 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:28.947 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:28.947 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:26:28.947 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:26:28.947 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:28.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.948 --rc genhtml_branch_coverage=1 00:26:28.948 --rc genhtml_function_coverage=1 00:26:28.948 --rc genhtml_legend=1 00:26:28.948 --rc geninfo_all_blocks=1 00:26:28.948 --rc geninfo_unexecuted_blocks=1 00:26:28.948 00:26:28.948 ' 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:28.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.948 --rc genhtml_branch_coverage=1 00:26:28.948 --rc genhtml_function_coverage=1 00:26:28.948 --rc genhtml_legend=1 00:26:28.948 --rc geninfo_all_blocks=1 00:26:28.948 --rc geninfo_unexecuted_blocks=1 00:26:28.948 00:26:28.948 ' 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:28.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.948 --rc genhtml_branch_coverage=1 00:26:28.948 --rc genhtml_function_coverage=1 00:26:28.948 --rc genhtml_legend=1 00:26:28.948 --rc geninfo_all_blocks=1 00:26:28.948 --rc geninfo_unexecuted_blocks=1 00:26:28.948 00:26:28.948 ' 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:28.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.948 --rc genhtml_branch_coverage=1 00:26:28.948 --rc genhtml_function_coverage=1 00:26:28.948 --rc genhtml_legend=1 00:26:28.948 --rc geninfo_all_blocks=1 00:26:28.948 --rc geninfo_unexecuted_blocks=1 00:26:28.948 00:26:28.948 ' 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:26:28.948 15:36:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:28.948 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:28.948 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:28.949 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:28.949 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:28.949 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:28.949 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:28.949 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:26:28.949 15:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:37.087 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:37.087 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:37.087 15:36:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:37.087 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:37.088 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:37.088 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:37.088 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:37.088 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:37.088 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:37.088 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:37.088 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:37.088 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:37.088 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:37.088 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:37.088 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:37.088 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:37.088 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:37.088 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:37.088 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:37.088 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:37.088 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:37.088 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:37.088 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:37.088 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:37.088 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:37.088 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:37.088 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:26:37.088 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:37.088 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:37.088 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:37.088 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:37.088 
15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:37.088 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:37.088 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:37.088 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:37.088 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:37.088 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:37.088 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:37.088 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:37.088 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:37.088 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:37.088 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:37.088 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:37.088 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:37.088 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:37.088 15:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:37.088 15:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:37.088 15:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:37.088 15:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:37.088 15:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:37.088 15:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:37.088 15:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:37.088 15:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:37.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:37.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.557 ms 00:26:37.088 00:26:37.088 --- 10.0.0.2 ping statistics --- 00:26:37.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:37.088 rtt min/avg/max/mdev = 0.557/0.557/0.557/0.000 ms 00:26:37.088 15:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:37.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:37.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:26:37.088 00:26:37.088 --- 10.0.0.1 ping statistics --- 00:26:37.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:37.088 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:26:37.088 15:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:37.088 15:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:26:37.088 15:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:37.088 15:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:37.088 15:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:37.088 15:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:37.088 15:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:37.088 15:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:37.088 15:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:37.088 15:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:26:37.088 15:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:37.088 15:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:37.088 15:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:37.088 15:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=725468 00:26:37.088 15:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 725468 00:26:37.088 15:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:37.088 15:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 725468 ']' 00:26:37.088 15:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:37.088 15:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:37.088 15:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:37.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:37.088 15:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:37.088 15:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:37.088 [2024-11-20 15:36:25.269925] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
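A note on the plumbing behind the ping output above: nvmf_tcp_init splits the two detected e810 ports across network namespaces so one machine can play both initiator and target. Condensed from the trace (interface and namespace names exactly as detected there; the iptables comment argument is dropped for brevity):

    ip netns add cvl_0_0_ns_spdk                  # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # target port moves into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port toward the initiator interface (nvmf/common.sh@287).
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

Both pings succeeding with 0% loss, as shown above, is the gate before nvmfappstart launches nvmf_tgt inside the namespace.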
00:26:37.088 [2024-11-20 15:36:25.269993] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:37.088 [2024-11-20 15:36:25.368723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.088 [2024-11-20 15:36:25.419561] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:37.088 [2024-11-20 15:36:25.419611] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:37.088 [2024-11-20 15:36:25.419620] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:37.088 [2024-11-20 15:36:25.419627] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:37.088 [2024-11-20 15:36:25.419633] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:37.088 [2024-11-20 15:36:25.420438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:37.349 15:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:37.349 15:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:26:37.349 15:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:37.349 15:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:37.349 15:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:37.349 15:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:37.349 15:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:37.349 15:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.349 15:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:37.349 [2024-11-20 15:36:26.132925] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:37.349 15:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.349 15:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:37.349 15:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.349 15:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:37.349 [2024-11-20 15:36:26.145195] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:37.349 15:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.349 15:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:37.349 15:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.349 15:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:37.349 null0 00:26:37.349 15:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.349 15:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:37.349 15:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.349 15:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:37.349 null1 00:26:37.349 15:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.349 15:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:37.349 15:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.349 15:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:37.349 15:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.349 15:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=725790 00:26:37.349 15:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 725790 /tmp/host.sock 00:26:37.349 15:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:37.349 15:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 725790 ']' 00:26:37.349 15:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:37.349 15:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:37.349 15:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:37.349 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:37.350 15:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:37.350 15:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:37.350 [2024-11-20 15:36:26.240954] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
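For orientation, the discovery test scaffolding assembled above comes down to two SPDK applications plus a few RPCs. A condensed sketch (paths and arguments as in the trace; $RPC is editorial shorthand for scripts/rpc.py talking to the target's default RPC socket):

    # Target side (pid 725468, started under ip netns exec cvl_0_0_ns_spdk):
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    $RPC bdev_null_create null0 1000 512      # two null bdevs to expose later
    $RPC bdev_null_create null1 1000 512
    $RPC bdev_wait_for_examine

    # Host side (pid 725790): a second nvmf_tgt with its own RPC socket, which the
    # test drives below via rpc_cmd -s /tmp/host.sock (e.g. bdev_nvme_start_discovery
    # against the discovery service on port 8009).
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &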
00:26:37.350 [2024-11-20 15:36:26.241015] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid725790 ]
00:26:37.610 [2024-11-20 15:36:26.332765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:37.610 [2024-11-20 15:36:26.385467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:38.180 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:38.180 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0
00:26:38.180 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:26:38.180 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
00:26:38.180 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:38.180 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:38.180 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:38.180 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
00:26:38.180 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:38.180 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:38.180 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:38.180 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0
00:26:38.180 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names
00:26:38.180 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:38.180 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:38.180 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:38.180 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:26:38.180 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:26:38.180 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:26:38.180 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:38.180 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]]
00:26:38.180 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list
00:26:38.180 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:26:38.180 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:38.180 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:38.180 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:26:38.180 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:38.180 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:26:38.441 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:38.441 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]]
00:26:38.441 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:26:38.441 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:38.441 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:38.441 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:38.441 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names
00:26:38.441 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:38.441 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:26:38.441 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:38.441 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:26:38.441 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:38.441 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:26:38.441 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:38.441 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]]
00:26:38.441 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list
00:26:38.441 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:38.441 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:26:38.441 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:38.441 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:38.442 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:26:38.442 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:26:38.442 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:38.442 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]]
00:26:38.442 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:26:38.442 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:38.442 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:38.442 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:38.442 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names
00:26:38.442 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:38.442 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:38.442 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:38.442 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:26:38.442 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:26:38.442 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:26:38.442 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:38.442 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]]
00:26:38.442 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list
00:26:38.442 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:26:38.442 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:38.442 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:38.442 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:38.442 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:26:38.442 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:26:38.442 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:38.442 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:26:38.442 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:26:38.442 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:38.442 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:38.442 [2024-11-20 15:36:27.384413] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:38.442 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:38.442 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names
00:26:38.442 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:38.442 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:26:38.442 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:38.442 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:38.442 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:26:38.442 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]]
00:26:38.703 15:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
00:26:39.275 [2024-11-20 15:36:28.134382] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:26:39.275 [2024-11-20 15:36:28.134414] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:26:39.275 [2024-11-20 15:36:28.134430] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:26:39.275 [2024-11-20 15:36:28.221686] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:26:39.536 [2024-11-20 15:36:28.320642] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420
00:26:39.536 [2024-11-20 15:36:28.321998] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1ef47a0:1 started.
00:26:39.536 [2024-11-20 15:36:28.323878] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:26:39.536 [2024-11-20 15:36:28.323907] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:26:39.536 [2024-11-20 15:36:28.330879] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1ef47a0 was disconnected and freed. delete nvme_qpair.
00:26:39.797 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:39.797 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:26:39.797 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:26:39.797 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:39.797 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:26:39.797 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:39.797 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:26:39.797 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:39.797 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:26:39.797 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:39.797 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:39.797 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:39.797 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:26:39.797 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:26:39.797 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]]
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:39.798 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:40.059 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:26:40.059 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1
00:26:40.059 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:26:40.059 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:40.059 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:26:40.059 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:40.059 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:40.059 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:40.059 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:26:40.059 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:26:40.059 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:40.059 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:40.059 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:26:40.059 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:26:40.059 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:40.059 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:26:40.059 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:40.059 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:26:40.059 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:40.059 15:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:26:40.059 [2024-11-20 15:36:29.000208] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1ec3120:1 started.
00:26:40.059 [2024-11-20 15:36:29.011885] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1ec3120 was disconnected and freed. delete nvme_qpair.
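The xtrace fragments above expand the same two small helpers from host/discovery.sh over and over (their pipelines are visible in the @55 and @59 trace lines). A minimal reconstruction, assuming rpc_cmd is the suite's wrapper around SPDK's scripts/rpc.py and /tmp/host.sock is the discovery host's RPC socket exactly as shown in the trace:

    get_subsystem_names() {
        # names of the NVMe controllers the host app currently has attached
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {
        # names of the bdevs created from discovered namespaces
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

Both print an empty string while nothing is attached, which is why the early assertions above compare against '' until nvmf_subsystem_add_listener exposes the subsystem to the discovery service.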
00:26:40.320 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:40.321 [2024-11-20 15:36:29.088812] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:26:40.321 [2024-11-20 15:36:29.089776] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:26:40.321 [2024-11-20 15:36:29.089796] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:40.321 [2024-11-20 15:36:29.176501] bdev_nvme.c:7403:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:40.321 [2024-11-20 15:36:29.236261] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421
00:26:40.321 [2024-11-20 15:36:29.236298] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:26:40.321 [2024-11-20 15:36:29.236307] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:26:40.321 [2024-11-20 15:36:29.236312] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]]
00:26:40.321 15:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:41.709 [2024-11-20 15:36:30.360866] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:26:41.709 [2024-11-20 15:36:30.360891] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:26:41.709 [2024-11-20 15:36:30.364791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:41.709 [2024-11-20 15:36:30.364810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:41.709 [2024-11-20 15:36:30.364821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:41.709 [2024-11-20 15:36:30.364829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:41.709 [2024-11-20 15:36:30.364837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:41.709 [2024-11-20 15:36:30.364845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:41.709 [2024-11-20 15:36:30.364853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:41.709 [2024-11-20 15:36:30.364860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:41.709 [2024-11-20 15:36:30.364868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec4e10 is same with the state(6) to be set
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:26:41.709 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:26:41.709 [2024-11-20 15:36:30.374803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec4e10 (9): Bad file descriptor
00:26:41.709 [2024-11-20 15:36:30.384837] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:26:41.709 [2024-11-20 15:36:30.384851] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:26:41.709 [2024-11-20 15:36:30.384856] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:26:41.709 [2024-11-20 15:36:30.384866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:26:41.709 [2024-11-20 15:36:30.384886] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:26:41.709 [2024-11-20 15:36:30.385440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.709 [2024-11-20 15:36:30.385479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec4e10 with addr=10.0.0.2, port=4420
00:26:41.709 [2024-11-20 15:36:30.385490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec4e10 is same with the state(6) to be set
00:26:41.709 [2024-11-20 15:36:30.385510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec4e10 (9): Bad file descriptor
00:26:41.709 [2024-11-20 15:36:30.385547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:26:41.709 [2024-11-20 15:36:30.385556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:26:41.709 [2024-11-20 15:36:30.385566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:26:41.709 [2024-11-20 15:36:30.385574] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:26:41.709 [2024-11-20 15:36:30.385580] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:26:41.709 [2024-11-20 15:36:30.385586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
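Most of the local/eval/(( max-- )) noise in this trace comes from the suite's waitforcondition polling helper. A sketch reconstructed from the common/autotest_common.sh@918-@924 lines above (the real helper may differ in details; the failure path here is an assumption):

    waitforcondition() {
        local cond=$1
        local max=10
        while ((max--)); do
            # re-evaluate the condition string, e.g.
            # '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
            if eval "$cond"; then
                return 0
            fi
            sleep 1
        done
        return 1  # assumed timeout behavior; not visible in the trace
    }

Each assertion is thus retried up to ten times at one-second intervals, which matches the repeated @920 (( max-- )) expansions and the @924 sleep 1 between the 4420-only and "4420 4421" path checks above.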
00:26:41.710 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:41.710 [2024-11-20 15:36:30.394917] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:26:41.710 [2024-11-20 15:36:30.394931] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:26:41.710 [2024-11-20 15:36:30.394935] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:26:41.710 [2024-11-20 15:36:30.394940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:26:41.710 [2024-11-20 15:36:30.394956] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:26:41.710 [2024-11-20 15:36:30.395394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.710 [2024-11-20 15:36:30.395431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec4e10 with addr=10.0.0.2, port=4420
00:26:41.710 [2024-11-20 15:36:30.395448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec4e10 is same with the state(6) to be set
00:26:41.710 [2024-11-20 15:36:30.395469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec4e10 (9): Bad file descriptor
00:26:41.710 [2024-11-20 15:36:30.395482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:26:41.710 [2024-11-20 15:36:30.395491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:26:41.710 [2024-11-20 15:36:30.395501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:26:41.710 [2024-11-20 15:36:30.395508] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:26:41.710 [2024-11-20 15:36:30.395514] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:26:41.710 [2024-11-20 15:36:30.395518] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:26:41.710 [2024-11-20 15:36:30.404990] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:26:41.710 [2024-11-20 15:36:30.405006] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:26:41.710 [2024-11-20 15:36:30.405011] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:26:41.710 [2024-11-20 15:36:30.405016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:26:41.710 [2024-11-20 15:36:30.405032] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:26:41.710 [2024-11-20 15:36:30.405331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.710 [2024-11-20 15:36:30.405346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec4e10 with addr=10.0.0.2, port=4420
00:26:41.710 [2024-11-20 15:36:30.405354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec4e10 is same with the state(6) to be set
00:26:41.710 [2024-11-20 15:36:30.405366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec4e10 (9): Bad file descriptor
00:26:41.710 [2024-11-20 15:36:30.405377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:26:41.710 [2024-11-20 15:36:30.405384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:26:41.710 [2024-11-20 15:36:30.405391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:26:41.710 [2024-11-20 15:36:30.405397] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:26:41.710 [2024-11-20 15:36:30.405402] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:26:41.710 [2024-11-20 15:36:30.405407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:26:41.710 [2024-11-20 15:36:30.415064] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:26:41.710 [2024-11-20 15:36:30.415076] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:26:41.710 [2024-11-20 15:36:30.415081] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:26:41.710 [2024-11-20 15:36:30.415086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:26:41.710 [2024-11-20 15:36:30.415101] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:26:41.710 [2024-11-20 15:36:30.415399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.710 [2024-11-20 15:36:30.415415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec4e10 with addr=10.0.0.2, port=4420
00:26:41.710 [2024-11-20 15:36:30.415424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec4e10 is same with the state(6) to be set
00:26:41.710 [2024-11-20 15:36:30.415435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec4e10 (9): Bad file descriptor
00:26:41.710 [2024-11-20 15:36:30.415446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:26:41.710 [2024-11-20 15:36:30.415454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:26:41.710 [2024-11-20 15:36:30.415461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:26:41.710 [2024-11-20 15:36:30.415468] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:26:41.710 [2024-11-20 15:36:30.415472] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:26:41.710 [2024-11-20 15:36:30.415477] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:26:41.710 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:41.710 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:41.710 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:26:41.710 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:26:41.710 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:41.710 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:41.710 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:26:41.710 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:26:41.710 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:41.710 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:26:41.710 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:26:41.710 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:26:41.710 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:41.710 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:41.710 [2024-11-20 15:36:30.425133] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:26:41.710 [2024-11-20 15:36:30.425145] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:26:41.710 [2024-11-20 15:36:30.425150] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:26:41.710 [2024-11-20 15:36:30.425155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:26:41.710 [2024-11-20 15:36:30.425174] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:26:41.710 [2024-11-20 15:36:30.425486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.710 [2024-11-20 15:36:30.425499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec4e10 with addr=10.0.0.2, port=4420
00:26:41.710 [2024-11-20 15:36:30.425510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec4e10 is same with the state(6) to be set
00:26:41.710 [2024-11-20 15:36:30.425521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec4e10 (9): Bad file descriptor
00:26:41.710 [2024-11-20 15:36:30.425535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:26:41.710 [2024-11-20 15:36:30.425542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:26:41.710 [2024-11-20 15:36:30.425549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:26:41.710 [2024-11-20 15:36:30.425555] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:26:41.710 [2024-11-20 15:36:30.425560] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:26:41.710 [2024-11-20 15:36:30.425565] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:26:41.710 [2024-11-20 15:36:30.435206] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:26:41.710 [2024-11-20 15:36:30.435221] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:26:41.710 [2024-11-20 15:36:30.435225] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:26:41.710 [2024-11-20 15:36:30.435230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:26:41.710 [2024-11-20 15:36:30.435245] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:26:41.710 [2024-11-20 15:36:30.435532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.710 [2024-11-20 15:36:30.435545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec4e10 with addr=10.0.0.2, port=4420
00:26:41.710 [2024-11-20 15:36:30.435553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec4e10 is same with the state(6) to be set
00:26:41.710 [2024-11-20 15:36:30.435564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec4e10 (9): Bad file descriptor
00:26:41.710 [2024-11-20 15:36:30.435580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:26:41.710 [2024-11-20 15:36:30.435587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:26:41.710 [2024-11-20 15:36:30.435594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:26:41.710 [2024-11-20 15:36:30.435600] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:26:41.710 [2024-11-20 15:36:30.435605] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:26:41.710 [2024-11-20 15:36:30.435610] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:26:41.710 [2024-11-20 15:36:30.445276] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:26:41.710 [2024-11-20 15:36:30.445288] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:26:41.710 [2024-11-20 15:36:30.445293] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:26:41.710 [2024-11-20 15:36:30.445297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:26:41.710 [2024-11-20 15:36:30.445311] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:26:41.710 [2024-11-20 15:36:30.445489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.710 [2024-11-20 15:36:30.445501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec4e10 with addr=10.0.0.2, port=4420
00:26:41.710 [2024-11-20 15:36:30.445508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec4e10 is same with the state(6) to be set
00:26:41.710 [2024-11-20 15:36:30.445523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec4e10 (9): Bad file descriptor
00:26:41.710 [2024-11-20 15:36:30.445533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:26:41.710 [2024-11-20 15:36:30.445540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:26:41.710 [2024-11-20 15:36:30.445547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:26:41.710 [2024-11-20 15:36:30.445553] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:26:41.710 [2024-11-20 15:36:30.445557] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:26:41.710 [2024-11-20 15:36:30.445562] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:26:41.710 [2024-11-20 15:36:30.455343] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:26:41.710 [2024-11-20 15:36:30.455356] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:26:41.710 [2024-11-20 15:36:30.455360] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:26:41.710 [2024-11-20 15:36:30.455365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:26:41.710 [2024-11-20 15:36:30.455379] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:26:41.711 [2024-11-20 15:36:30.455494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.711 [2024-11-20 15:36:30.455506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec4e10 with addr=10.0.0.2, port=4420 00:26:41.711 [2024-11-20 15:36:30.455513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec4e10 is same with the state(6) to be set 00:26:41.711 [2024-11-20 15:36:30.455524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec4e10 (9): Bad file descriptor 00:26:41.711 [2024-11-20 15:36:30.455534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:41.711 [2024-11-20 15:36:30.455540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:41.711 [2024-11-20 15:36:30.455548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:41.711 [2024-11-20 15:36:30.455553] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:41.711 [2024-11-20 15:36:30.455558] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:41.711 [2024-11-20 15:36:30.455562] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:41.711 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.711 [2024-11-20 15:36:30.465411] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:41.711 [2024-11-20 15:36:30.465423] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:41.711 [2024-11-20 15:36:30.465428] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:41.711 [2024-11-20 15:36:30.465432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:41.711 [2024-11-20 15:36:30.465446] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:41.711 [2024-11-20 15:36:30.465730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.711 [2024-11-20 15:36:30.465741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec4e10 with addr=10.0.0.2, port=4420 00:26:41.711 [2024-11-20 15:36:30.465755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec4e10 is same with the state(6) to be set 00:26:41.711 [2024-11-20 15:36:30.465766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec4e10 (9): Bad file descriptor 00:26:41.711 [2024-11-20 15:36:30.465789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:41.711 [2024-11-20 15:36:30.465796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:41.711 [2024-11-20 15:36:30.465804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:26:41.711 [2024-11-20 15:36:30.465809] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:41.711 [2024-11-20 15:36:30.465815] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:41.711 [2024-11-20 15:36:30.465821] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:41.711 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:41.711 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:41.711 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:41.711 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:41.711 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:41.711 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:41.711 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:41.711 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:41.711 [2024-11-20 15:36:30.475474] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:41.711 [2024-11-20 15:36:30.475483] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:41.711 [2024-11-20 15:36:30.475486] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:41.711 [2024-11-20 15:36:30.475489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:41.711 [2024-11-20 15:36:30.475498] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:41.711 [2024-11-20 15:36:30.475674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.711 [2024-11-20 15:36:30.475684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec4e10 with addr=10.0.0.2, port=4420 00:26:41.711 [2024-11-20 15:36:30.475689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec4e10 is same with the state(6) to be set 00:26:41.711 [2024-11-20 15:36:30.475697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec4e10 (9): Bad file descriptor 00:26:41.711 [2024-11-20 15:36:30.475704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:41.711 [2024-11-20 15:36:30.475708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:41.711 [2024-11-20 15:36:30.475713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:26:41.711 [2024-11-20 15:36:30.475718] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:41.711 [2024-11-20 15:36:30.475725] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:41.711 [2024-11-20 15:36:30.475728] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:41.711 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:41.711 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:41.711 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.711 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:41.711 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:41.711 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:41.711 [2024-11-20 15:36:30.485527] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:41.711 [2024-11-20 15:36:30.485537] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:41.711 [2024-11-20 15:36:30.485541] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:41.711 [2024-11-20 15:36:30.485544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:41.711 [2024-11-20 15:36:30.485554] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:41.711 [2024-11-20 15:36:30.485746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.711 [2024-11-20 15:36:30.485755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec4e10 with addr=10.0.0.2, port=4420 00:26:41.711 [2024-11-20 15:36:30.485761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec4e10 is same with the state(6) to be set 00:26:41.711 [2024-11-20 15:36:30.485769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec4e10 (9): Bad file descriptor 00:26:41.711 [2024-11-20 15:36:30.485776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:41.711 [2024-11-20 15:36:30.485780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:41.711 [2024-11-20 15:36:30.485785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:41.711 [2024-11-20 15:36:30.485790] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:41.711 [2024-11-20 15:36:30.485793] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:41.711 [2024-11-20 15:36:30.485796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:26:41.712 [2024-11-20 15:36:30.488950] bdev_nvme.c:7266:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:41.712 [2024-11-20 15:36:30.488964] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:41.712 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.712 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:26:41.712 15:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:26:42.655 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:42.655 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:42.655 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:42.655 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:42.655 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:42.655 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.655 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:42.655 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:42.655 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:42.656 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.656 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:26:42.656 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:42.656 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:42.656 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:42.656 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:42.656 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:42.656 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:42.656 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:42.656 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:42.656 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:42.656 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:42.656 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:42.656 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.656 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:42.656 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.656 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:42.656 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:42.656 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:42.656 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:42.656 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:42.656 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.656 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:42.917 15:36:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:42.917 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:42.918 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.918 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:42.918 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.918 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:42.918 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:42.918 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:42.918 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:42.918 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:42.918 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.918 15:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.303 [2024-11-20 15:36:32.825237] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:44.303 [2024-11-20 15:36:32.825255] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:44.303 [2024-11-20 15:36:32.825265] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:44.303 [2024-11-20 15:36:32.913511] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:44.303 [2024-11-20 15:36:33.178850] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:26:44.303 [2024-11-20 15:36:33.179588] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x202a9b0:1 started. 00:26:44.303 [2024-11-20 15:36:33.180981] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:44.303 [2024-11-20 15:36:33.181007] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:44.303 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.303 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:44.303 [2024-11-20 15:36:33.183112] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x202a9b0 was disconnected and freed. delete nvme_qpair. 
00:26:44.303 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:44.303 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:44.303 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:44.303 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:44.303 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:44.303 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:44.303 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:44.303 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.303 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.303 request: 00:26:44.303 { 00:26:44.303 "name": "nvme", 00:26:44.303 "trtype": "tcp", 00:26:44.303 "traddr": "10.0.0.2", 00:26:44.303 "adrfam": "ipv4", 00:26:44.303 "trsvcid": "8009", 00:26:44.303 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:44.303 "wait_for_attach": true, 00:26:44.303 "method": "bdev_nvme_start_discovery", 00:26:44.303 "req_id": 1 00:26:44.303 } 00:26:44.303 Got JSON-RPC error response 00:26:44.303 response: 00:26:44.303 { 00:26:44.303 "code": -17, 00:26:44.303 "message": "File exists" 00:26:44.303 } 00:26:44.303 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:44.303 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:44.303 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:44.303 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:44.303 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:44.303 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:44.303 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:44.303 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:44.303 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.303 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:44.303 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.303 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:44.303 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.303 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:44.303 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:44.303 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:44.303 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:44.303 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.303 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:44.303 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.303 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:44.564 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.564 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:44.564 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:44.564 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:44.564 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:44.564 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:44.564 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:44.564 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:44.564 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:44.564 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:44.564 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.564 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.564 request: 00:26:44.564 { 00:26:44.564 "name": "nvme_second", 00:26:44.564 "trtype": "tcp", 00:26:44.564 "traddr": "10.0.0.2", 00:26:44.564 "adrfam": "ipv4", 00:26:44.564 "trsvcid": "8009", 00:26:44.564 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:44.564 "wait_for_attach": true, 00:26:44.564 "method": "bdev_nvme_start_discovery", 00:26:44.564 "req_id": 1 00:26:44.564 } 00:26:44.564 Got JSON-RPC error response 00:26:44.564 response: 00:26:44.564 { 00:26:44.564 "code": -17, 00:26:44.564 "message": "File exists" 00:26:44.564 } 00:26:44.564 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:44.564 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:44.564 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:44.564 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:44.564 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:44.564 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # 
get_discovery_ctrlrs 00:26:44.564 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:44.564 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:44.564 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.564 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:44.564 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.564 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:44.564 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.565 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:44.565 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:44.565 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:44.565 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:44.565 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.565 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:44.565 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.565 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:44.565 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.565 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:44.565 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:44.565 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:44.565 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:44.565 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:44.565 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:44.565 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:44.565 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:44.565 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:44.565 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.565 15:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:45.507 [2024-11-20 15:36:34.440504] posix.c:1054:posix_sock_create: *ERROR*: connect() 
failed, errno = 111 00:26:45.507 [2024-11-20 15:36:34.440531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeca0 with addr=10.0.0.2, port=8010 00:26:45.507 [2024-11-20 15:36:34.440543] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:45.507 [2024-11-20 15:36:34.440550] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:45.507 [2024-11-20 15:36:34.440556] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:46.891 [2024-11-20 15:36:35.442969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.891 [2024-11-20 15:36:35.442994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeca0 with addr=10.0.0.2, port=8010 00:26:46.891 [2024-11-20 15:36:35.443010] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:46.891 [2024-11-20 15:36:35.443016] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:46.891 [2024-11-20 15:36:35.443021] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:47.864 [2024-11-20 15:36:36.444976] bdev_nvme.c:7522:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:47.864 request: 00:26:47.864 { 00:26:47.864 "name": "nvme_second", 00:26:47.864 "trtype": "tcp", 00:26:47.864 "traddr": "10.0.0.2", 00:26:47.864 "adrfam": "ipv4", 00:26:47.864 "trsvcid": "8010", 00:26:47.864 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:47.864 "wait_for_attach": false, 00:26:47.864 "attach_timeout_ms": 3000, 00:26:47.864 "method": "bdev_nvme_start_discovery", 00:26:47.864 "req_id": 1 00:26:47.864 } 00:26:47.864 Got JSON-RPC error response 00:26:47.864 response: 00:26:47.864 { 00:26:47.864 "code": -110, 00:26:47.864 "message": "Connection timed out" 00:26:47.864 } 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:47.864 15:36:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 725790 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:47.864 rmmod nvme_tcp 00:26:47.864 rmmod nvme_fabrics 00:26:47.864 rmmod nvme_keyring 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 725468 ']' 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 725468 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 725468 ']' 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 725468 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 725468 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 725468' 00:26:47.864 killing process with pid 725468 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 725468 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 725468 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:47.864 15:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:50.410 15:36:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:50.410 00:26:50.410 real 0m21.363s 00:26:50.410 user 0m25.605s 00:26:50.410 sys 0m7.244s 00:26:50.410 15:36:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:50.410 15:36:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:50.410 ************************************ 00:26:50.410 END TEST nvmf_host_discovery 00:26:50.410 ************************************ 00:26:50.410 15:36:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:50.410 15:36:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:50.410 15:36:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:50.410 15:36:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.410 ************************************ 00:26:50.410 START TEST nvmf_host_multipath_status 00:26:50.410 ************************************ 00:26:50.410 15:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:50.410 * Looking for test storage... 
00:26:50.410 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:50.410 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:50.410 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:26:50.410 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:50.410 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:50.410 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:50.410 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:50.410 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:50.410 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:26:50.410 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:26:50.410 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:26:50.410 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:26:50.410 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:26:50.410 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:26:50.410 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:26:50.410 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:50.410 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:26:50.410 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:26:50.410 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:50.410 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:50.410 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:26:50.410 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:26:50.410 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:50.410 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:26:50.410 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:50.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.411 --rc genhtml_branch_coverage=1 00:26:50.411 --rc genhtml_function_coverage=1 00:26:50.411 --rc genhtml_legend=1 00:26:50.411 --rc geninfo_all_blocks=1 00:26:50.411 --rc geninfo_unexecuted_blocks=1 00:26:50.411 00:26:50.411 ' 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:50.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.411 --rc genhtml_branch_coverage=1 00:26:50.411 --rc genhtml_function_coverage=1 00:26:50.411 --rc genhtml_legend=1 00:26:50.411 --rc geninfo_all_blocks=1 00:26:50.411 --rc geninfo_unexecuted_blocks=1 00:26:50.411 00:26:50.411 ' 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:50.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.411 --rc genhtml_branch_coverage=1 00:26:50.411 --rc genhtml_function_coverage=1 00:26:50.411 --rc genhtml_legend=1 00:26:50.411 --rc geninfo_all_blocks=1 00:26:50.411 --rc geninfo_unexecuted_blocks=1 00:26:50.411 00:26:50.411 ' 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:50.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.411 --rc genhtml_branch_coverage=1 00:26:50.411 --rc genhtml_function_coverage=1 00:26:50.411 --rc genhtml_legend=1 00:26:50.411 --rc geninfo_all_blocks=1 00:26:50.411 --rc geninfo_unexecuted_blocks=1 00:26:50.411 00:26:50.411 ' 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:50.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:26:50.411 15:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:26:58.556 15:36:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:58.556 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
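
What the trace above is doing: nvmf/common.sh builds allow-lists of supported NIC PCI IDs (Intel E810 0x1592/0x159b, X722 0x37d2, and several Mellanox ConnectX device IDs) and then walks the detected PCI devices, reporting each match. A minimal sketch of the same discovery step, assuming lspci is available; the script itself reads a prebuilt pci_bus_cache map rather than calling lspci:

    # Hypothetical stand-in for the E810 (0x8086:0x159b) scan that the
    # log reports as "Found 0000:4b:00.0 (0x8086 - 0x159b)".
    for bdf in $(lspci -Dmm -d 8086:159b | awk '{print $1}'); do
        echo "Found $bdf (0x8086 - 0x159b)"
    done
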
00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:58.556 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:58.556 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:58.557 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: 
cvl_0_1' 00:26:58.557 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:58.557 15:36:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:58.557 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:58.557 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.702 ms 00:26:58.557 00:26:58.557 --- 10.0.0.2 ping statistics --- 00:26:58.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.557 rtt min/avg/max/mdev = 0.702/0.702/0.702/0.000 ms 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:58.557 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:58.557 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:26:58.557 00:26:58.557 --- 10.0.0.1 ping statistics --- 00:26:58.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.557 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=732010 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 732010 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 732010 ']' 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:58.557 15:36:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:58.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:58.557 15:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:58.557 [2024-11-20 15:36:46.728622] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:26:58.557 [2024-11-20 15:36:46.728723] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:58.557 [2024-11-20 15:36:46.832383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:58.557 [2024-11-20 15:36:46.885020] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:58.557 [2024-11-20 15:36:46.885075] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:58.558 [2024-11-20 15:36:46.885084] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:58.558 [2024-11-20 15:36:46.885092] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:58.558 [2024-11-20 15:36:46.885098] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:58.558 [2024-11-20 15:36:46.886931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:58.558 [2024-11-20 15:36:46.886933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.819 15:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:58.819 15:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:26:58.819 15:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:58.819 15:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:58.819 15:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:58.819 15:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:58.819 15:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=732010 00:26:58.819 15:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:58.819 [2024-11-20 15:36:47.754452] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:59.080 15:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:59.080 Malloc0 00:26:59.080 15:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:26:59.341 15:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:59.603 15:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:59.603 [2024-11-20 15:36:48.558459] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:59.865 15:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:59.865 [2024-11-20 15:36:48.754948] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:59.865 15:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=732518 00:26:59.865 15:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:59.865 15:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:59.865 15:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 732518 /var/tmp/bdevperf.sock 00:26:59.865 15:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 732518 ']' 00:26:59.865 15:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:59.865 15:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:59.865 15:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:59.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
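
Condensed, the namespace plumbing and target bringup traced above amount to the following sequence ($rpc_py is scripts/rpc.py, as set at multipath_status.sh@15; the full workspace paths are shortened here for readability):

    # Move the target port into its own netns so host and target can
    # exchange NVMe/TCP traffic on one machine (10.0.0.0/24).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # One malloc-backed namespace exported through two TCP listeners;
    # ports 4420 and 4421 are the two paths the test will exercise.
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

After this, build/examples/bdevperf is started with -z -r /var/tmp/bdevperf.sock as the host side, and two bdev_nvme_attach_controller calls (-x multipath, one per port) give it a single Nvme0n1 bdev with two I/O paths.
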
00:26:59.865 15:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:59.865 15:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:00.808 15:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:00.808 15:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:27:00.808 15:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:01.069 15:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:01.330 Nvme0n1 00:27:01.330 15:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:01.591 Nvme0n1 00:27:01.852 15:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:27:01.852 15:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:03.765 15:36:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:27:03.765 15:36:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:04.026 15:36:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:04.026 15:36:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:27:05.410 15:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:27:05.410 15:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:05.410 15:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.410 15:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:05.410 15:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:05.410 15:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:05.410 15:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.410 15:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:05.410 15:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:05.410 15:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:05.410 15:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.410 15:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:05.669 15:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:05.669 15:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:05.669 15:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.669 15:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:05.929 15:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:05.929 15:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:05.929 15:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.929 15:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:05.929 15:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:05.929 15:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:05.929 15:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.929 15:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:06.190 15:36:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:06.190 15:36:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:27:06.190 15:36:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
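
Every check_status call in the remainder of this log expands into six of the @64 probes shown above, one per combination of port (4420/4421) and field (current/connected/accessible). Reconstructed from the trace, port_status boils down to one RPC plus a jq filter; the local variable names below are guesses, not necessarily the script's:

    # port_status <trsvcid> <field> <expected>: query bdevperf's I/O
    # paths over its private RPC socket and compare one field of the
    # path whose listener port matches.
    port_status() {
        local port=$1 field=$2 expected=$3 actual
        actual=$($rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
        [[ "$actual" == "$expected" ]]
    }
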
00:27:06.450 15:36:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:06.710 15:36:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:27:07.803 15:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:27:07.803 15:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:07.803 15:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.803 15:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:07.803 15:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:07.803 15:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:07.803 15:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.803 15:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:08.065 15:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:08.065 15:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:08.065 15:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.065 15:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:08.065 15:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:08.065 15:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:08.066 15:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.066 15:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:08.328 15:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:08.328 15:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:08.328 15:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:08.328 15:36:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.591 15:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:08.591 15:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:08.591 15:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.591 15:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:08.591 15:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:08.591 15:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:27:08.591 15:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:08.852 15:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:09.112 15:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:27:10.052 15:36:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:27:10.052 15:36:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:10.052 15:36:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:10.052 15:36:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:10.313 15:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:10.313 15:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:10.313 15:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:10.313 15:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:10.573 15:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:10.573 15:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:10.573 15:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:10.573 15:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:10.573 15:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:10.573 15:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:10.573 15:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:10.573 15:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:10.833 15:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:10.833 15:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:10.833 15:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:10.833 15:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.094 15:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:11.094 15:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:11.094 15:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.094 15:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:11.094 15:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:11.094 15:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:27:11.094 15:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:11.357 15:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:11.619 15:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:27:12.563 15:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:27:12.563 15:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:12.563 15:37:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.563 15:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:12.825 15:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:12.825 15:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:12.825 15:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.825 15:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:12.825 15:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:12.825 15:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:12.825 15:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.825 15:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:13.086 15:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:13.087 15:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:13.087 15:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:13.087 15:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:13.347 15:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:13.347 15:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:13.347 15:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:13.347 15:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:13.609 15:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:13.609 15:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:13.609 15:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:13.609 15:37:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:13.609 15:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:13.609 15:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:27:13.609 15:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:13.871 15:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:14.132 15:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:27:15.074 15:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:27:15.074 15:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:15.074 15:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.074 15:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:15.335 15:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:15.335 15:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:15.335 15:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.335 15:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:15.335 15:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:15.335 15:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:15.335 15:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.335 15:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:15.596 15:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.596 15:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:15.596 15:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.596 15:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:15.856 15:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.856 15:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:15.856 15:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.856 15:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:15.856 15:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:15.856 15:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:15.856 15:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.856 15:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:16.117 15:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:16.117 15:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:27:16.117 15:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:16.378 15:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:16.639 15:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:27:17.581 15:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:27:17.581 15:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:17.581 15:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:17.581 15:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:17.843 15:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:17.843 15:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:17.843 15:37:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:17.843 15:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:17.843 15:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:17.843 15:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:17.843 15:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:17.843 15:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:18.104 15:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.104 15:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:18.104 15:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.104 15:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:18.365 15:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.365 15:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:18.365 15:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.365 15:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:18.365 15:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:18.365 15:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:18.365 15:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.365 15:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:18.625 15:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.625 15:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:27:18.887 15:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:27:18.887 15:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:18.887 15:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:19.147 15:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:27:20.089 15:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:27:20.089 15:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:20.089 15:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.089 15:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:20.350 15:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:20.350 15:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:20.350 15:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.350 15:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:20.610 15:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:20.610 15:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:20.610 15:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.610 15:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:20.610 15:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:20.610 15:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:20.610 15:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.610 15:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:20.871 15:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:20.871 15:37:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:20.871 15:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.871 15:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:21.132 15:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:21.132 15:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:21.132 15:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:21.132 15:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:21.132 15:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:21.132 15:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:27:21.132 15:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:21.393 15:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:21.654 15:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:27:22.597 15:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:27:22.597 15:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:22.597 15:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:22.597 15:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:22.858 15:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:22.858 15:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:22.858 15:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:22.858 15:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:23.120 15:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:23.120 15:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:23.120 15:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.120 15:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:23.120 15:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:23.120 15:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:23.120 15:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.120 15:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:23.382 15:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:23.382 15:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:23.382 15:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.382 15:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:23.643 15:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:23.643 15:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:23.643 15:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.643 15:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:23.643 15:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:23.643 15:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:27:23.643 15:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:23.903 15:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:24.164 15:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
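Each pass above runs the same cycle: flip the ANA state of both listeners, wait a second, then assert the current/connected/accessible flags per port. A condensed sketch of that cycle, paraphrased from the xtrace (the helper bodies below are inferred from the traced commands, not copied from multipath_status.sh):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # $1/$2 = ANA state for the 4420/4421 listeners of cnode1
    set_ANA_state() {
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    # $1 = trsvcid, $2 = field (current|connected|accessible), $3 = expected value
    port_status() {
      local v
      v=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
          jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
      [[ "$v" == "$3" ]]
    }

    set_ANA_state non_optimized non_optimized   # the @129/@130 step just traced
    sleep 1                                     # let the initiator pick up the ANA change
    port_status 4420 current true && port_status 4421 current true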
00:27:25.106 15:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:27:25.106 15:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:25.106 15:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.106 15:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:25.367 15:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:25.367 15:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:25.367 15:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.367 15:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:25.367 15:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:25.367 15:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:25.367 15:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.367 15:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:25.628 15:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:25.628 15:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:25.628 15:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.628 15:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:25.889 15:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:25.889 15:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:25.889 15:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.889 15:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:26.149 15:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:26.149 15:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:26.149 15:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:26.149 15:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:26.149 15:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:26.149 15:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:27:26.149 15:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:26.410 15:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:26.671 15:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:27:27.612 15:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:27:27.612 15:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:27.612 15:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:27.612 15:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:27.612 15:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:27.612 15:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:27.612 15:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:27.612 15:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:27.873 15:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:27.873 15:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:27.873 15:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:27.873 15:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:28.133 15:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:27:28.133 15:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:28.133 15:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:28.133 15:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:28.394 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:28.394 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:28.394 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:28.394 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:28.394 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:28.394 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:28.394 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:28.394 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:28.654 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:28.655 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 732518 00:27:28.655 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 732518 ']' 00:27:28.655 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 732518 00:27:28.655 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:27:28.655 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:28.655 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 732518 00:27:28.655 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:27:28.655 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:27:28.655 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 732518' 00:27:28.655 killing process with pid 732518 00:27:28.655 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 732518 00:27:28.655 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 732518 00:27:28.943 { 00:27:28.943 "results": [ 00:27:28.943 { 00:27:28.943 "job": "Nvme0n1", 00:27:28.943 
"core_mask": "0x4", 00:27:28.943 "workload": "verify", 00:27:28.943 "status": "terminated", 00:27:28.943 "verify_range": { 00:27:28.943 "start": 0, 00:27:28.943 "length": 16384 00:27:28.943 }, 00:27:28.943 "queue_depth": 128, 00:27:28.943 "io_size": 4096, 00:27:28.943 "runtime": 26.923179, 00:27:28.943 "iops": 11744.304043738668, 00:27:28.943 "mibps": 45.87618767085417, 00:27:28.943 "io_failed": 0, 00:27:28.943 "io_timeout": 0, 00:27:28.943 "avg_latency_us": 10862.584011250477, 00:27:28.943 "min_latency_us": 105.38666666666667, 00:27:28.943 "max_latency_us": 3467291.3066666666 00:27:28.943 } 00:27:28.943 ], 00:27:28.943 "core_count": 1 00:27:28.943 } 00:27:28.943 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 732518 00:27:28.943 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:28.943 [2024-11-20 15:36:48.846382] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:27:28.943 [2024-11-20 15:36:48.846462] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid732518 ] 00:27:28.943 [2024-11-20 15:36:48.940892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.943 [2024-11-20 15:36:48.991693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:28.943 Running I/O for 90 seconds... 00:27:28.943 10617.00 IOPS, 41.47 MiB/s [2024-11-20T14:37:17.903Z] 10928.00 IOPS, 42.69 MiB/s [2024-11-20T14:37:17.903Z] 11018.00 IOPS, 43.04 MiB/s [2024-11-20T14:37:17.903Z] 11214.75 IOPS, 43.81 MiB/s [2024-11-20T14:37:17.903Z] 11550.00 IOPS, 45.12 MiB/s [2024-11-20T14:37:17.903Z] 11788.00 IOPS, 46.05 MiB/s [2024-11-20T14:37:17.903Z] 11947.14 IOPS, 46.67 MiB/s [2024-11-20T14:37:17.903Z] 12072.25 IOPS, 47.16 MiB/s [2024-11-20T14:37:17.903Z] 12176.44 IOPS, 47.56 MiB/s [2024-11-20T14:37:17.903Z] 12263.20 IOPS, 47.90 MiB/s [2024-11-20T14:37:17.903Z] 12315.55 IOPS, 48.11 MiB/s [2024-11-20T14:37:17.903Z] [2024-11-20 15:37:02.660147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.943 [2024-11-20 15:37:02.660184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:28.943 [2024-11-20 15:37:02.660202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.943 [2024-11-20 15:37:02.660209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.943 [2024-11-20 15:37:02.660220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.943 [2024-11-20 15:37:02.660226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.943 [2024-11-20 15:37:02.660237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.943 [2024-11-20 15:37:02.660242] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.943 [2024-11-20 15:37:02.660252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.943 [2024-11-20 15:37:02.660258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:28.943 [2024-11-20 15:37:02.660268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.943 [2024-11-20 15:37:02.660273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:28.943 [2024-11-20 15:37:02.660283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.943 [2024-11-20 15:37:02.660288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:28.943 [2024-11-20 15:37:02.660298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:6584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.944 [2024-11-20 15:37:02.660304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:28.944 [2024-11-20 15:37:02.660314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.944 [2024-11-20 15:37:02.660319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:28.944 [2024-11-20 15:37:02.660329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:6600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.944 [2024-11-20 15:37:02.660340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:28.944 [2024-11-20 15:37:02.660350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.944 [2024-11-20 15:37:02.660356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:28.944 [2024-11-20 15:37:02.660366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.944 [2024-11-20 15:37:02.660371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:28.944 [2024-11-20 15:37:02.660381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.944 [2024-11-20 15:37:02.660386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:28.944 [2024-11-20 15:37:02.660396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.944 
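The { "results": [ ... ] } object printed after killprocess is bdevperf's run summary for the Nvme0n1 verify job. If that object were captured to a standalone file (results.json is a hypothetical name; in this log it is interleaved with timestamps), the headline numbers would extract with jq along these lines:

    # pull job name, throughput, runtime, and mean latency from the summary
    jq -r '.results[]
           | "\(.job): \(.iops|floor) IOPS, \(.mibps) MiB/s over \(.runtime)s, avg latency \(.avg_latency_us) us"' \
       results.json
    # prints roughly: Nvme0n1: 11744 IOPS, 45.87... MiB/s over 26.923179s, avg latency 10862.58... us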
[2024-11-20 15:37:02.660401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:28.944 [2024-11-20 15:37:02.660411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.944 [2024-11-20 15:37:02.660417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:28.944 [2024-11-20 15:37:02.660427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.944 [2024-11-20 15:37:02.660432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:28.944 [2024-11-20 15:37:02.660442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.944 [2024-11-20 15:37:02.660447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:28.944 [2024-11-20 15:37:02.660458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.944 [2024-11-20 15:37:02.660463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:28.944 [2024-11-20 15:37:02.660473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.944 [2024-11-20 15:37:02.660478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:28.944 [2024-11-20 15:37:02.660488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.944 [2024-11-20 15:37:02.660493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:28.944 [2024-11-20 15:37:02.660503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.944 [2024-11-20 15:37:02.660509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:28.944 [2024-11-20 15:37:02.660519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.944 [2024-11-20 15:37:02.660524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:28.944 [2024-11-20 15:37:02.660535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.944 [2024-11-20 15:37:02.660541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:28.944 [2024-11-20 15:37:02.660551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6712 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:27:28.944 [2024-11-20 15:37:02.660557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:28.944 [2024-11-20 15:37:02.660567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.944 [2024-11-20 15:37:02.660572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:28.944 [2024-11-20 15:37:02.660582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:6728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.944 [2024-11-20 15:37:02.660588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:28.944 [2024-11-20 15:37:02.660598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.944 [2024-11-20 15:37:02.660603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:28.944 [2024-11-20 15:37:02.660613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.944 [2024-11-20 15:37:02.660618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:28.944 [2024-11-20 15:37:02.660628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.944 [2024-11-20 15:37:02.660634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:28.944 [2024-11-20 15:37:02.660644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.944 [2024-11-20 15:37:02.660649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:28.944 [2024-11-20 15:37:02.660659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.944 [2024-11-20 15:37:02.660665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:28.944 [2024-11-20 15:37:02.660675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.944 [2024-11-20 15:37:02.660681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:28.944 [2024-11-20 15:37:02.660691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.944 [2024-11-20 15:37:02.660696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:28.944 [2024-11-20 15:37:02.660707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:51 nsid:1 lba:6792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.944 [2024-11-20 15:37:02.660712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:28.944 [2024-11-20 15:37:02.660724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:6800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.944 [2024-11-20 15:37:02.660729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:28.944 [2024-11-20 15:37:02.660739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.944 [2024-11-20 15:37:02.660744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.944 [2024-11-20 15:37:02.660756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.944 [2024-11-20 15:37:02.660761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:28.944 [2024-11-20 15:37:02.660771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.944 [2024-11-20 15:37:02.660776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:28.944 [2024-11-20 15:37:02.660786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.944 [2024-11-20 15:37:02.660791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:28.944 [2024-11-20 15:37:02.660802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.944 [2024-11-20 15:37:02.660807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:28.944 [2024-11-20 15:37:02.660817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.944 [2024-11-20 15:37:02.660823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:28.944 [2024-11-20 15:37:02.660833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.944 [2024-11-20 15:37:02.660839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:28.944 [2024-11-20 15:37:02.660849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.944 [2024-11-20 15:37:02.660855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:28.944 [2024-11-20 15:37:02.660865] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.944 [2024-11-20 15:37:02.660870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:28.944 [2024-11-20 15:37:02.660881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.944 [2024-11-20 15:37:02.660887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:28.944 [2024-11-20 15:37:02.660897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.944 [2024-11-20 15:37:02.660902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:28.945 [2024-11-20 15:37:02.660912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.945 [2024-11-20 15:37:02.660918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:28.945 [2024-11-20 15:37:02.660928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.945 [2024-11-20 15:37:02.660933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:28.945 [2024-11-20 15:37:02.660943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.945 [2024-11-20 15:37:02.660948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:28.945 [2024-11-20 15:37:02.660959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.945 [2024-11-20 15:37:02.660964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:28.945 [2024-11-20 15:37:02.660974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.945 [2024-11-20 15:37:02.660979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:28.945 [2024-11-20 15:37:02.660989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.945 [2024-11-20 15:37:02.660994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:28.945 [2024-11-20 15:37:02.661004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:6944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.945 [2024-11-20 15:37:02.661009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:28.945 
[2024-11-20 15:37:02.661020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.945 [2024-11-20 15:37:02.661025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:28.945 [2024-11-20 15:37:02.661492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.945 [2024-11-20 15:37:02.661502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:28.945 [2024-11-20 15:37:02.661514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.945 [2024-11-20 15:37:02.661519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:28.945 [2024-11-20 15:37:02.661529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.945 [2024-11-20 15:37:02.661534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:28.945 [2024-11-20 15:37:02.661544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.945 [2024-11-20 15:37:02.661549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:28.945 [2024-11-20 15:37:02.661560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.945 [2024-11-20 15:37:02.661565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:28.945 [2024-11-20 15:37:02.661577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.945 [2024-11-20 15:37:02.661582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:28.945 [2024-11-20 15:37:02.661592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.945 [2024-11-20 15:37:02.661597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:28.945 [2024-11-20 15:37:02.661607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.945 [2024-11-20 15:37:02.661612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:28.945 [2024-11-20 15:37:02.661622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.945 [2024-11-20 15:37:02.661627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 
sqhd:003d p:0 m:0 dnr:0 00:27:28.945 [2024-11-20 15:37:02.661637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.945 [2024-11-20 15:37:02.661642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:28.945 [2024-11-20 15:37:02.661652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.945 [2024-11-20 15:37:02.661658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:28.945 [2024-11-20 15:37:02.661668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.945 [2024-11-20 15:37:02.661673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:28.945 [2024-11-20 15:37:02.661683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:7056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.945 [2024-11-20 15:37:02.661689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:28.945 [2024-11-20 15:37:02.661699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.945 [2024-11-20 15:37:02.661704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.945 [2024-11-20 15:37:02.661714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.945 [2024-11-20 15:37:02.661720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:28.945 [2024-11-20 15:37:02.661730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.945 [2024-11-20 15:37:02.661735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:28.945 [2024-11-20 15:37:02.661745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.945 [2024-11-20 15:37:02.661750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:28.945 [2024-11-20 15:37:02.661894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.945 [2024-11-20 15:37:02.661902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:28.945 [2024-11-20 15:37:02.661912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.945 [2024-11-20 15:37:02.661918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:28.945 [2024-11-20 15:37:02.661928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.945 [2024-11-20 15:37:02.661933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:28.945 [2024-11-20 15:37:02.661943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.945 [2024-11-20 15:37:02.661949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:28.945 [2024-11-20 15:37:02.661959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.945 [2024-11-20 15:37:02.661964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:28.945 [2024-11-20 15:37:02.661974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.945 [2024-11-20 15:37:02.661979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:28.945 [2024-11-20 15:37:02.661989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.945 [2024-11-20 15:37:02.661994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:28.945 [2024-11-20 15:37:02.662004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.945 [2024-11-20 15:37:02.662010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:28.945 [2024-11-20 15:37:02.662020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.945 [2024-11-20 15:37:02.662025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:28.945 [2024-11-20 15:37:02.662035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.945 [2024-11-20 15:37:02.662040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:28.945 [2024-11-20 15:37:02.662050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.945 [2024-11-20 15:37:02.662055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:28.945 [2024-11-20 15:37:02.662065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.945 [2024-11-20 15:37:02.662070] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:28.945 [2024-11-20 15:37:02.662080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:7192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.945 [2024-11-20 15:37:02.662087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:28.945 [2024-11-20 15:37:02.662097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.946 [2024-11-20 15:37:02.662102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:28.946 [2024-11-20 15:37:02.662112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.946 [2024-11-20 15:37:02.662117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:28.946 [2024-11-20 15:37:02.662128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.946 [2024-11-20 15:37:02.662133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:28.946 [2024-11-20 15:37:02.662143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.946 [2024-11-20 15:37:02.662148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:28.946 [2024-11-20 15:37:02.662164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.946 [2024-11-20 15:37:02.662169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:28.946 [2024-11-20 15:37:02.662183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.946 [2024-11-20 15:37:02.662188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:28.946 [2024-11-20 15:37:02.662198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.946 [2024-11-20 15:37:02.662204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:28.946 [2024-11-20 15:37:02.662214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.946 [2024-11-20 15:37:02.662219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:28.946 [2024-11-20 15:37:02.662229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.946 
[2024-11-20 15:37:02.662234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:28.946 [2024-11-20 15:37:02.662244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.946 [2024-11-20 15:37:02.662249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:28.946 [2024-11-20 15:37:02.662259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.946 [2024-11-20 15:37:02.662265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:28.946 [2024-11-20 15:37:02.662275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.946 [2024-11-20 15:37:02.662281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:28.946 [2024-11-20 15:37:02.662292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.946 [2024-11-20 15:37:02.662297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:28.946 [2024-11-20 15:37:02.662308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.946 [2024-11-20 15:37:02.662313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:28.946 [2024-11-20 15:37:02.662323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.946 [2024-11-20 15:37:02.662329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:28.946 [2024-11-20 15:37:02.662339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.946 [2024-11-20 15:37:02.662345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.946 [2024-11-20 15:37:02.662355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.946 [2024-11-20 15:37:02.662360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:28.946 [2024-11-20 15:37:02.662370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.946 [2024-11-20 15:37:02.662375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:28.946 [2024-11-20 15:37:02.662386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7344 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000
00:27:28.946 [2024-11-20 15:37:02.662391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:27:28.946 [2024-11-20 15:37:02.662401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:28.946 [2024-11-20 15:37:02.662406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
[... several hundred further near-identical command/completion pairs elided: every queued I/O on qid:1 (WRITEs at lba 6536-7544 and a READ at lba 6528, len:8 each) completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02), p:0 m:0 dnr:0, between 15:37:02.662 and 15:37:02.678 ...]
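The "(03/02)" printed above is the NVMe status pair SCT/SC: status code type 3h (Path Related Status) with status code 02h (Asymmetric Access Inaccessible), i.e. the controller is reporting the namespace's ANA state rather than a media error, and dnr:0 leaves each command retryable once the path becomes accessible again. A minimal stand-alone sketch of how those fields unpack from the upper half of completion dword 3 (plain C following the NVMe completion-queue-entry layout; this is illustrative, not SPDK's own print code):

    #include <stdint.h>
    #include <stdio.h>

    /* Status half-word = CQE DW3 bits 31:16, per the NVMe base spec. */
    struct cqe_status {
        unsigned p   : 1;  /* phase tag           -> the "p:0" field above   */
        unsigned sc  : 8;  /* status code         -> 02h = Asymm. Access Inaccessible */
        unsigned sct : 3;  /* status code type    -> 03h = Path Related Status */
        unsigned crd : 2;  /* command retry delay */
        unsigned m   : 1;  /* more                -> the "m:0" field         */
        unsigned dnr : 1;  /* do not retry        -> the "dnr:0" field       */
    };

    int main(void)
    {
        /* Value matching the completions above: SCT=0x3, SC=0x2, P/M/DNR=0. */
        uint16_t raw = (uint16_t)((0x3u << 9) | (0x2u << 1));
        struct cqe_status st;

        st.p   =  raw        & 0x1;
        st.sc  = (raw >> 1)  & 0xff;
        st.sct = (raw >> 9)  & 0x7;
        st.crd = (raw >> 12) & 0x3;
        st.m   = (raw >> 14) & 0x1;
        st.dnr = (raw >> 15) & 0x1;

        /* Prints "sct:03 sc:02 p:0 m:0 dnr:0", mirroring the log format. */
        printf("sct:%02x sc:%02x p:%u m:%u dnr:%u\n",
               (unsigned)st.sct, (unsigned)st.sc,
               (unsigned)st.p, (unsigned)st.m, (unsigned)st.dnr);
        return 0;
    }

(The sqhd value printed alongside is the submission-queue head pointer echoed in the completion, which is why it increments by one per entry and wraps; cid is the command identifier that ties each completion back to its WRITE/READ print.)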
00:27:28.950 12356.00 IOPS, 48.27 MiB/s [2024-11-20T14:37:17.910Z]
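As a sanity check on the sample above (assuming the conventional 512-byte logical block size, which the log itself does not state): len:8 blocks is 4 KiB per I/O, and 12356 IOPS x 4096 B = 50,610,176 B/s, which is 48.27 MiB/s, so the rate line is consistent with the per-command prints.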
[... the same failure pattern then continues on qid:1 (a second pass: WRITEs from lba 6536 upward plus a READ at lba 6528, all completing ASYMMETRIC ACCESS INACCESSIBLE (03/02)) between 15:37:02.679 and 15:37:02.681 ...]
00:27:28.951 [2024-11-20 15:37:02.680882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:6872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:28.951 [2024-11-20 15:37:02.680889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:28.951 [2024-11-20 15:37:02.680903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.951 [2024-11-20 15:37:02.680909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:28.951 [2024-11-20 15:37:02.680923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.951 [2024-11-20 15:37:02.680929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:28.951 [2024-11-20 15:37:02.680943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.951 [2024-11-20 15:37:02.680949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:28.951 [2024-11-20 15:37:02.680963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.951 [2024-11-20 15:37:02.680970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:28.951 [2024-11-20 15:37:02.680983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.951 [2024-11-20 15:37:02.680992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:28.951 [2024-11-20 15:37:02.681006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.951 [2024-11-20 15:37:02.681013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:28.951 [2024-11-20 15:37:02.681027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.951 [2024-11-20 15:37:02.681033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:28.951 [2024-11-20 15:37:02.681047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.951 [2024-11-20 15:37:02.681054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:28.951 [2024-11-20 15:37:02.681067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.951 [2024-11-20 15:37:02.681074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:28.951 [2024-11-20 15:37:02.681087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.952 [2024-11-20 15:37:02.681094] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:28.952 [2024-11-20 15:37:02.681107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.952 [2024-11-20 15:37:02.681114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:28.952 [2024-11-20 15:37:02.681128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:6968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.952 [2024-11-20 15:37:02.681134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:28.952 [2024-11-20 15:37:02.681148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.952 [2024-11-20 15:37:02.681154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:28.952 [2024-11-20 15:37:02.681186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.952 [2024-11-20 15:37:02.681193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:28.952 [2024-11-20 15:37:02.681207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.952 [2024-11-20 15:37:02.681214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:28.952 [2024-11-20 15:37:02.681228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.952 [2024-11-20 15:37:02.681234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:28.952 [2024-11-20 15:37:02.681248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.952 [2024-11-20 15:37:02.681257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:28.952 [2024-11-20 15:37:02.681271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.952 [2024-11-20 15:37:02.681278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:28.952 [2024-11-20 15:37:02.681291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.952 [2024-11-20 15:37:02.681298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:28.952 [2024-11-20 15:37:02.681311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.952 
[2024-11-20 15:37:02.681318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:28.952 [2024-11-20 15:37:02.681332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.952 [2024-11-20 15:37:02.681338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:28.952 [2024-11-20 15:37:02.681352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.952 [2024-11-20 15:37:02.681359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:28.952 [2024-11-20 15:37:02.681372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.952 [2024-11-20 15:37:02.681379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:28.952 [2024-11-20 15:37:02.681393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.952 [2024-11-20 15:37:02.681400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.952 [2024-11-20 15:37:02.681413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.952 [2024-11-20 15:37:02.681420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:28.952 [2024-11-20 15:37:02.681433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.952 [2024-11-20 15:37:02.681440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:28.952 [2024-11-20 15:37:02.681454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.952 [2024-11-20 15:37:02.681460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:28.952 [2024-11-20 15:37:02.681474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.952 [2024-11-20 15:37:02.681481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:28.952 [2024-11-20 15:37:02.681495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.952 [2024-11-20 15:37:02.681501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:28.952 [2024-11-20 15:37:02.681516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7112 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:27:28.952 [2024-11-20 15:37:02.681523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:28.952 [2024-11-20 15:37:02.681536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.952 [2024-11-20 15:37:02.681543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:28.952 [2024-11-20 15:37:02.681557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.952 [2024-11-20 15:37:02.681563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:28.952 [2024-11-20 15:37:02.681577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.952 [2024-11-20 15:37:02.681583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:28.952 [2024-11-20 15:37:02.681597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.952 [2024-11-20 15:37:02.681603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:28.952 [2024-11-20 15:37:02.681617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.952 [2024-11-20 15:37:02.681624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:28.952 [2024-11-20 15:37:02.681637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.952 [2024-11-20 15:37:02.681643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:28.952 [2024-11-20 15:37:02.688897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.952 [2024-11-20 15:37:02.688928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:28.952 [2024-11-20 15:37:02.688950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.952 [2024-11-20 15:37:02.688960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:28.952 [2024-11-20 15:37:02.688979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.952 [2024-11-20 15:37:02.688989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:28.952 [2024-11-20 15:37:02.689008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:67 nsid:1 lba:7192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.952 [2024-11-20 15:37:02.689017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:28.952 [2024-11-20 15:37:02.689036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.952 [2024-11-20 15:37:02.689045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:28.952 [2024-11-20 15:37:02.689064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.952 [2024-11-20 15:37:02.689078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:28.952 [2024-11-20 15:37:02.689097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.952 [2024-11-20 15:37:02.689107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:28.952 [2024-11-20 15:37:02.689125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.953 [2024-11-20 15:37:02.689134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:28.953 [2024-11-20 15:37:02.689153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.953 [2024-11-20 15:37:02.689171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:28.953 [2024-11-20 15:37:02.689190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.953 [2024-11-20 15:37:02.689199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:28.953 [2024-11-20 15:37:02.689218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.953 [2024-11-20 15:37:02.689227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:28.953 [2024-11-20 15:37:02.689245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.953 [2024-11-20 15:37:02.689255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:28.953 [2024-11-20 15:37:02.689273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.953 [2024-11-20 15:37:02.689282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:28.953 [2024-11-20 15:37:02.689301] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.953 [2024-11-20 15:37:02.689310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:28.953 [2024-11-20 15:37:02.689329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.953 [2024-11-20 15:37:02.689338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:28.953 [2024-11-20 15:37:02.689356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.953 [2024-11-20 15:37:02.689365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:28.953 [2024-11-20 15:37:02.689384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:7296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.953 [2024-11-20 15:37:02.689393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:28.953 [2024-11-20 15:37:02.689413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.953 [2024-11-20 15:37:02.689425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:28.953 [2024-11-20 15:37:02.689444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.953 [2024-11-20 15:37:02.689453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:28.953 [2024-11-20 15:37:02.689472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.953 [2024-11-20 15:37:02.689481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.953 [2024-11-20 15:37:02.689500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.953 [2024-11-20 15:37:02.689510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:28.953 [2024-11-20 15:37:02.690480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.953 [2024-11-20 15:37:02.690500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:28.953 [2024-11-20 15:37:02.690523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.953 [2024-11-20 15:37:02.690533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:28.953 
[2024-11-20 15:37:02.690552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.953 [2024-11-20 15:37:02.690562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:28.953 [2024-11-20 15:37:02.690580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.953 [2024-11-20 15:37:02.690590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:28.953 [2024-11-20 15:37:02.690608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.953 [2024-11-20 15:37:02.690617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:28.953 [2024-11-20 15:37:02.690636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.953 [2024-11-20 15:37:02.690646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:28.953 [2024-11-20 15:37:02.690665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.953 [2024-11-20 15:37:02.690674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:28.953 [2024-11-20 15:37:02.690693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.953 [2024-11-20 15:37:02.690702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:28.953 [2024-11-20 15:37:02.690721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.953 [2024-11-20 15:37:02.690730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:28.953 [2024-11-20 15:37:02.690752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.953 [2024-11-20 15:37:02.690762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:28.953 [2024-11-20 15:37:02.690780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.953 [2024-11-20 15:37:02.690790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:28.953 [2024-11-20 15:37:02.690808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.953 [2024-11-20 15:37:02.690818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 
cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:28.953 [2024-11-20 15:37:02.690836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.953 [2024-11-20 15:37:02.690846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:28.953 [2024-11-20 15:37:02.690864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.953 [2024-11-20 15:37:02.690874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:28.953 [2024-11-20 15:37:02.690893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.953 [2024-11-20 15:37:02.690902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:28.953 [2024-11-20 15:37:02.690920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.953 [2024-11-20 15:37:02.690931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:28.953 [2024-11-20 15:37:02.690949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.953 [2024-11-20 15:37:02.690958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:28.953 [2024-11-20 15:37:02.690978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.953 [2024-11-20 15:37:02.690987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:28.953 [2024-11-20 15:37:02.691006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.953 [2024-11-20 15:37:02.691015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:28.953 [2024-11-20 15:37:02.691034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.953 [2024-11-20 15:37:02.691043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:28.953 [2024-11-20 15:37:02.691062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.953 [2024-11-20 15:37:02.691071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:28.953 [2024-11-20 15:37:02.691099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.953 [2024-11-20 15:37:02.691109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:28.953 [2024-11-20 15:37:02.691127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.953 [2024-11-20 15:37:02.691138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:28.953 [2024-11-20 15:37:02.691156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.953 [2024-11-20 15:37:02.691172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:28.953 [2024-11-20 15:37:02.691192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.954 [2024-11-20 15:37:02.691201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:28.954 [2024-11-20 15:37:02.691220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.954 [2024-11-20 15:37:02.691229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:28.954 [2024-11-20 15:37:02.691247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.954 [2024-11-20 15:37:02.691257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:28.954 [2024-11-20 15:37:02.691275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.954 [2024-11-20 15:37:02.691284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:28.954 [2024-11-20 15:37:02.691303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.954 [2024-11-20 15:37:02.691313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.954 [2024-11-20 15:37:02.691332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.954 [2024-11-20 15:37:02.691341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.954 [2024-11-20 15:37:02.691359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.954 [2024-11-20 15:37:02.691368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.954 [2024-11-20 15:37:02.691387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.954 [2024-11-20 15:37:02.691397] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:28.954 [2024-11-20 15:37:02.691416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.954 [2024-11-20 15:37:02.691426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:28.954 [2024-11-20 15:37:02.691444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.954 [2024-11-20 15:37:02.691456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:28.954 [2024-11-20 15:37:02.691475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.954 [2024-11-20 15:37:02.691484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:28.954 [2024-11-20 15:37:02.691503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.954 [2024-11-20 15:37:02.691512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:28.954 [2024-11-20 15:37:02.691530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.954 [2024-11-20 15:37:02.691539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:28.954 [2024-11-20 15:37:02.691558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.954 [2024-11-20 15:37:02.691566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:28.954 [2024-11-20 15:37:02.691585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.954 [2024-11-20 15:37:02.691594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:28.954 [2024-11-20 15:37:02.691612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.954 [2024-11-20 15:37:02.691621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:28.954 [2024-11-20 15:37:02.691641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.954 [2024-11-20 15:37:02.691650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:28.954 [2024-11-20 15:37:02.691668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.954 
[2024-11-20 15:37:02.691677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:28.954 [2024-11-20 15:37:02.691696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.954 [2024-11-20 15:37:02.691705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:28.954 [2024-11-20 15:37:02.691723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:6656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.954 [2024-11-20 15:37:02.691732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:28.954 [2024-11-20 15:37:02.691750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.954 [2024-11-20 15:37:02.691760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:28.954 [2024-11-20 15:37:02.691778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.954 [2024-11-20 15:37:02.691787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:28.954 [2024-11-20 15:37:02.691807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.954 [2024-11-20 15:37:02.691816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:28.954 [2024-11-20 15:37:02.691835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.954 [2024-11-20 15:37:02.691844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:28.954 [2024-11-20 15:37:02.691863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.954 [2024-11-20 15:37:02.691872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:28.954 [2024-11-20 15:37:02.691890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.954 [2024-11-20 15:37:02.691899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:28.954 [2024-11-20 15:37:02.691918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.954 [2024-11-20 15:37:02.691927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:28.954 [2024-11-20 15:37:02.691945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6720 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:27:28.954 [2024-11-20 15:37:02.691954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:28.954 [2024-11-20 15:37:02.691972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:6728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.954 [2024-11-20 15:37:02.691981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:28.954 [2024-11-20 15:37:02.692000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.954 [2024-11-20 15:37:02.692009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:28.954 [2024-11-20 15:37:02.692027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.954 [2024-11-20 15:37:02.692036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:28.954 [2024-11-20 15:37:02.692055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.954 [2024-11-20 15:37:02.692064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:28.954 [2024-11-20 15:37:02.692082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.954 [2024-11-20 15:37:02.692092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:28.954 [2024-11-20 15:37:02.692110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.954 [2024-11-20 15:37:02.692119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:28.954 [2024-11-20 15:37:02.692139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.954 [2024-11-20 15:37:02.692149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:28.954 [2024-11-20 15:37:02.692172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.954 [2024-11-20 15:37:02.692181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:28.954 [2024-11-20 15:37:02.692200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.954 [2024-11-20 15:37:02.692210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:28.954 [2024-11-20 15:37:02.692229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:17 nsid:1 lba:6800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.954 [2024-11-20 15:37:02.692239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:28.954 [2024-11-20 15:37:02.692259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.955 [2024-11-20 15:37:02.692268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.955 [2024-11-20 15:37:02.693125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.955 [2024-11-20 15:37:02.693140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:28.955 [2024-11-20 15:37:02.693168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.955 [2024-11-20 15:37:02.693179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:28.955 [2024-11-20 15:37:02.693198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:6832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.955 [2024-11-20 15:37:02.693207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:28.955 [2024-11-20 15:37:02.693226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.955 [2024-11-20 15:37:02.693235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:28.955 [2024-11-20 15:37:02.693254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:6848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.955 [2024-11-20 15:37:02.693263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:28.955 [2024-11-20 15:37:02.693282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.955 [2024-11-20 15:37:02.693291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:28.955 [2024-11-20 15:37:02.693310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.955 [2024-11-20 15:37:02.693319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:28.955 [2024-11-20 15:37:02.693338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:6872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.955 [2024-11-20 15:37:02.693351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:28.955 [2024-11-20 15:37:02.693370] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.955 [2024-11-20 15:37:02.693379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:28.955 [2024-11-20 15:37:02.693398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.955 [2024-11-20 15:37:02.693407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:28.955 [2024-11-20 15:37:02.693425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.955 [2024-11-20 15:37:02.693435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:28.955 [2024-11-20 15:37:02.693453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.955 [2024-11-20 15:37:02.693462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:28.955 [2024-11-20 15:37:02.693481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.955 [2024-11-20 15:37:02.693490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:28.955 [2024-11-20 15:37:02.693509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.955 [2024-11-20 15:37:02.693518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:28.955 [2024-11-20 15:37:02.693536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.955 [2024-11-20 15:37:02.693546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:28.955 [2024-11-20 15:37:02.693565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.955 [2024-11-20 15:37:02.693574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:28.955 [2024-11-20 15:37:02.693593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.955 [2024-11-20 15:37:02.693602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:28.955 [2024-11-20 15:37:02.693621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:6952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.955 [2024-11-20 15:37:02.693630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:28.955 
[2024-11-20 15:37:02.693649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:28.955 [2024-11-20 15:37:02.693659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
[... repeated near-identical command/completion pairs elided: sweeps over lba 6528-7544 in len:8 steps on sqid:1 nsid:1, all WRITE except a single READ at lba:6528 per wrap, every command completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd cycling 0000-007f, p:0 m:0 dnr:0 throughout; timestamps 15:37:02.693-15:37:02.699, elapsed 00:27:28.955-00:27:28.959 ...]
00:27:28.959 [2024-11-20 15:37:02.699849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:28.959 [2024-11-20 15:37:02.699856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005c p:0 m:0 dnr:0
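[Annotation, not part of the captured log: the "(03/02)" pair that spdk_nvme_print_completion emits is (SCT/SC) — Status Code Type 0x3 (Path Related Status) with Status Code 0x02 (Asymmetric Access Inaccessible), i.e. the target is reporting that the namespace's ANA state on this controller path is Inaccessible, so each queued I/O is failed back with that status. Below is a minimal, self-contained C sketch of how those two values are carved out of the NVMe completion status halfword; the bit offsets follow the NVMe base specification, and everything here is illustrative rather than SPDK source.]

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Completion-queue-entry status halfword, per the NVMe base spec:
     * bit 0 = Phase tag (p), bits 8:1 = Status Code (SC),
     * bits 11:9 = Status Code Type (SCT), bit 14 = More (m),
     * bit 15 = Do Not Retry (dnr). The value below encodes SCT=0x3,
     * SC=0x02 with p/m/dnr all zero, matching "p:0 m:0 dnr:0" above. */
    uint16_t status = (uint16_t)((0x3u << 9) | (0x02u << 1));

    uint8_t sc  = (status >> 1) & 0xff;  /* Status Code        */
    uint8_t sct = (status >> 9) & 0x7;   /* Status Code Type   */
    int     dnr = (status >> 15) & 0x1;  /* Do Not Retry flag  */

    printf("(%02x/%02x) dnr:%d\n", sct, sc, dnr); /* prints: (03/02) dnr:0 */
    if (sct == 0x3 && sc == 0x02)
        printf("Path Related Status / Asymmetric Access Inaccessible\n");
    return 0;
}

[Since dnr:0 (Do Not Retry clear) in every completion, a multipath-aware host is expected to retry these I/Os on another path rather than surface a hard error — consistent with this being an ANA state-transition test rather than a data-loss failure.]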
00:27:28.959 [2024-11-20 15:37:03.106311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:7280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:28.959 [2024-11-20 15:37:03.106382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005d p:0 m:0 dnr:0
[... further identical pairs elided: WRITEs lba 7288-7544, then a READ at lba:6528 (sqhd:007f) wrapping back to sequential WRITEs from lba:6536, all completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02); timestamps 15:37:03.106-15:37:03.110, elapsed 00:27:28.959-00:27:28.960 ...]
00:27:28.960 [2024-11-20 15:37:03.110433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6568 len:8 SGL
DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.960 [2024-11-20 15:37:03.110457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:28.960 [2024-11-20 15:37:03.110515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.960 [2024-11-20 15:37:03.110539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:28.960 [2024-11-20 15:37:03.110597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.961 [2024-11-20 15:37:03.110621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:28.961 [2024-11-20 15:37:03.110678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.961 [2024-11-20 15:37:03.110706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:28.961 [2024-11-20 15:37:03.110765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.961 [2024-11-20 15:37:03.110788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:28.961 [2024-11-20 15:37:03.110847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.961 [2024-11-20 15:37:03.110870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:28.961 [2024-11-20 15:37:03.110929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.961 [2024-11-20 15:37:03.110954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:28.961 [2024-11-20 15:37:03.111013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.961 [2024-11-20 15:37:03.111038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:28.961 [2024-11-20 15:37:03.111096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.961 [2024-11-20 15:37:03.111121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:28.961 [2024-11-20 15:37:03.111188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:6640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.961 [2024-11-20 15:37:03.111213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:28.961 [2024-11-20 15:37:03.111272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:51 nsid:1 lba:6648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.961 [2024-11-20 15:37:03.111296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:28.961 [2024-11-20 15:37:03.111355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.961 [2024-11-20 15:37:03.111378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:28.961 [2024-11-20 15:37:03.111437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.961 [2024-11-20 15:37:03.111461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:28.961 [2024-11-20 15:37:03.111519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.961 [2024-11-20 15:37:03.111543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:28.961 [2024-11-20 15:37:03.111601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.961 [2024-11-20 15:37:03.111625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:28.961 [2024-11-20 15:37:03.111683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.961 [2024-11-20 15:37:03.111708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:28.961 [2024-11-20 15:37:03.111770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.961 [2024-11-20 15:37:03.111794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:28.961 [2024-11-20 15:37:03.111853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.961 [2024-11-20 15:37:03.111877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:28.961 [2024-11-20 15:37:03.111935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:6712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.961 [2024-11-20 15:37:03.111959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:28.961 [2024-11-20 15:37:03.112017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.961 [2024-11-20 15:37:03.112041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:28.961 [2024-11-20 15:37:03.112099] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.961 [2024-11-20 15:37:03.112123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:28.961 [2024-11-20 15:37:03.112189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.961 [2024-11-20 15:37:03.112214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:28.961 [2024-11-20 15:37:03.112273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.961 [2024-11-20 15:37:03.112297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:28.961 [2024-11-20 15:37:03.112356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.961 [2024-11-20 15:37:03.112380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:28.961 [2024-11-20 15:37:03.112438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.961 [2024-11-20 15:37:03.112462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:28.961 [2024-11-20 15:37:03.112520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.961 [2024-11-20 15:37:03.112543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:28.961 [2024-11-20 15:37:03.112601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.961 [2024-11-20 15:37:03.112624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:28.961 [2024-11-20 15:37:03.112682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.961 [2024-11-20 15:37:03.112706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:28.961 [2024-11-20 15:37:03.112770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.961 [2024-11-20 15:37:03.112794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:28.961 [2024-11-20 15:37:03.113299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.961 [2024-11-20 15:37:03.113333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:28.961 
11405.54 IOPS, 44.55 MiB/s [2024-11-20T14:37:17.921Z]
10590.86 IOPS, 41.37 MiB/s [2024-11-20T14:37:17.921Z]
9884.80 IOPS, 38.61 MiB/s [2024-11-20T14:37:17.921Z]
9729.69 IOPS, 38.01 MiB/s [2024-11-20T14:37:17.921Z]
9912.53 IOPS, 38.72 MiB/s [2024-11-20T14:37:17.921Z]
10299.50 IOPS, 40.23 MiB/s [2024-11-20T14:37:17.921Z]
10651.21 IOPS, 41.61 MiB/s [2024-11-20T14:37:17.921Z]
10905.55 IOPS, 42.60 MiB/s [2024-11-20T14:37:17.921Z]
11000.62 IOPS, 42.97 MiB/s [2024-11-20T14:37:17.921Z]
11091.14 IOPS, 43.32 MiB/s [2024-11-20T14:37:17.921Z]
11298.48 IOPS, 44.13 MiB/s [2024-11-20T14:37:17.921Z]
11531.46 IOPS, 45.04 MiB/s [2024-11-20T14:37:17.921Z]
[2024-11-20 15:37:15.356024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.961 [2024-11-20 15:37:15.356057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:28.961 [2024-11-20 15:37:15.356075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.961 [2024-11-20 15:37:15.356081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:28.961 [2024-11-20 15:37:15.356092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.961 [2024-11-20 15:37:15.356097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:28.961 [2024-11-20 15:37:15.356108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:72632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.961 [2024-11-20 15:37:15.356113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:28.961 [2024-11-20 15:37:15.356124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.962 [2024-11-20 15:37:15.356129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:28.962 [2024-11-20 15:37:15.356140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:72664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.962 [2024-11-20 15:37:15.356145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:28.962 [2024-11-20 15:37:15.356155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:72680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.962 [2024-11-20 15:37:15.356166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:28.962 [2024-11-20 15:37:15.356176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:72696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.962 [2024-11-20 15:37:15.356182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:28.962 [2024-11-20
15:37:15.356192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:72712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.962 [2024-11-20 15:37:15.356197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:28.962 [2024-11-20 15:37:15.356212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.962 [2024-11-20 15:37:15.356219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:28.962 [2024-11-20 15:37:15.356229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.962 [2024-11-20 15:37:15.356234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:28.962 [2024-11-20 15:37:15.356245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:72408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.962 [2024-11-20 15:37:15.356250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:28.962 [2024-11-20 15:37:15.356261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:72432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.962 [2024-11-20 15:37:15.356266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:28.962 [2024-11-20 15:37:15.356277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.962 [2024-11-20 15:37:15.356282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:28.962 [2024-11-20 15:37:15.356292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:72784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.962 [2024-11-20 15:37:15.356298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:28.962 [2024-11-20 15:37:15.356309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.962 [2024-11-20 15:37:15.356315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:28.962 [2024-11-20 15:37:15.356325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.962 [2024-11-20 15:37:15.356331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:28.962 [2024-11-20 15:37:15.357423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.962 [2024-11-20 15:37:15.357437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 
sqhd:0077 p:0 m:0 dnr:0 00:27:28.962 [2024-11-20 15:37:15.357450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.962 [2024-11-20 15:37:15.357455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:28.962 [2024-11-20 15:37:15.357466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.962 [2024-11-20 15:37:15.357472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:28.962 [2024-11-20 15:37:15.357483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:72440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.962 [2024-11-20 15:37:15.357488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:28.962 [2024-11-20 15:37:15.357498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:72888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.962 [2024-11-20 15:37:15.357506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:28.962 [2024-11-20 15:37:15.357518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.962 [2024-11-20 15:37:15.357523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:28.962 [2024-11-20 15:37:15.357533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:72920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.962 [2024-11-20 15:37:15.357539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:28.962 [2024-11-20 15:37:15.357549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.962 [2024-11-20 15:37:15.357554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:28.962 [2024-11-20 15:37:15.357565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:72952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.962 [2024-11-20 15:37:15.357570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:28.962 [2024-11-20 15:37:15.357581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.962 [2024-11-20 15:37:15.357586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.962 [2024-11-20 15:37:15.357596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:72984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.962 [2024-11-20 15:37:15.357601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.962 [2024-11-20 15:37:15.357611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:73000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.962 [2024-11-20 15:37:15.357617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.962 [2024-11-20 15:37:15.357627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:73016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.962 [2024-11-20 15:37:15.357633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:28.962 [2024-11-20 15:37:15.357644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:73032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.962 [2024-11-20 15:37:15.357648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:28.962 [2024-11-20 15:37:15.357658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:73048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.962 [2024-11-20 15:37:15.357665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:28.962 [2024-11-20 15:37:15.357675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:73064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.962 [2024-11-20 15:37:15.357681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:28.962 [2024-11-20 15:37:15.357691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:73080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.962 [2024-11-20 15:37:15.357697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:28.962 [2024-11-20 15:37:15.357708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:73096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.962 [2024-11-20 15:37:15.357713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:28.962 [2024-11-20 15:37:15.357724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:73112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.962 [2024-11-20 15:37:15.357729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:28.962 [2024-11-20 15:37:15.357740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:73128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.962 [2024-11-20 15:37:15.357745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:28.962 [2024-11-20 15:37:15.357755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:73144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.962 [2024-11-20 15:37:15.357761] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:28.962 [2024-11-20 15:37:15.357772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.962 [2024-11-20 15:37:15.357777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:28.962 [2024-11-20 15:37:15.357787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:73176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.962 [2024-11-20 15:37:15.357793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:28.962 [2024-11-20 15:37:15.357803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:73192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.962 [2024-11-20 15:37:15.357808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:28.962 [2024-11-20 15:37:15.357818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:73208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.962 [2024-11-20 15:37:15.357824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:28.962 [2024-11-20 15:37:15.357834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:73224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.962 [2024-11-20 15:37:15.357840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:28.963 [2024-11-20 15:37:15.357851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:73240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.963 [2024-11-20 15:37:15.357855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:28.963 [2024-11-20 15:37:15.357866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:73256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.963 [2024-11-20 15:37:15.357871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:28.963 [2024-11-20 15:37:15.357882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:73272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.963 [2024-11-20 15:37:15.357887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:28.963 [2024-11-20 15:37:15.357899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:73288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.963 [2024-11-20 15:37:15.357904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:28.963 [2024-11-20 15:37:15.357914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:73304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:28.963 [2024-11-20 15:37:15.357919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:28.963 [2024-11-20 15:37:15.357930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:72424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.963 [2024-11-20 15:37:15.357936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:28.963 [2024-11-20 15:37:15.358377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:73320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.963 [2024-11-20 15:37:15.358387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:28.963 [2024-11-20 15:37:15.358398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:73336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.963 [2024-11-20 15:37:15.358404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:28.963 [2024-11-20 15:37:15.358414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:73352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.963 [2024-11-20 15:37:15.358419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:28.963 [2024-11-20 15:37:15.358429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:73368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.963 [2024-11-20 15:37:15.358435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:28.963 [2024-11-20 15:37:15.358446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:73384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.963 [2024-11-20 15:37:15.358451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:28.963 [2024-11-20 15:37:15.358461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:72472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.963 [2024-11-20 15:37:15.358466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:28.963 [2024-11-20 15:37:15.358476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.963 [2024-11-20 15:37:15.358481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:28.963 [2024-11-20 15:37:15.358493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:72536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.963 [2024-11-20 15:37:15.358498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:28.963 [2024-11-20 15:37:15.358509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 
nsid:1 lba:72568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.963 [2024-11-20 15:37:15.358514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:28.963 [2024-11-20 15:37:15.358526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:73400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.963 [2024-11-20 15:37:15.358531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:28.963 [2024-11-20 15:37:15.358541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:73416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.963 [2024-11-20 15:37:15.358546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:28.963 [2024-11-20 15:37:15.358556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:72464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.963 [2024-11-20 15:37:15.358562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.963 [2024-11-20 15:37:15.358572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.963 [2024-11-20 15:37:15.358577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:28.963 [2024-11-20 15:37:15.358587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:72528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.963 [2024-11-20 15:37:15.358592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:28.963 [2024-11-20 15:37:15.358603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:72560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.963 [2024-11-20 15:37:15.358608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:28.963 [2024-11-20 15:37:15.358619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:72592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.963 [2024-11-20 15:37:15.358623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:28.963 [2024-11-20 15:37:15.358634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:72624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.963 [2024-11-20 15:37:15.358639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:28.963 [2024-11-20 15:37:15.358772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:72656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.963 [2024-11-20 15:37:15.358779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:28.963 [2024-11-20 15:37:15.358790] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:72688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.963 [2024-11-20 15:37:15.358796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:28.963 [2024-11-20 15:37:15.358806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:72720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.963 [2024-11-20 15:37:15.358812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:28.963 [2024-11-20 15:37:15.358822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:72752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.963 [2024-11-20 15:37:15.358827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:28.963 [2024-11-20 15:37:15.358837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:72776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.963 [2024-11-20 15:37:15.358844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:28.963 [2024-11-20 15:37:15.358855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:72808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.963 [2024-11-20 15:37:15.358860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:28.963 [2024-11-20 15:37:15.358870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:72840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.963 [2024-11-20 15:37:15.358875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:28.963 [2024-11-20 15:37:15.358885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:72872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.963 [2024-11-20 15:37:15.358890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:28.963 [2024-11-20 15:37:15.358901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:72896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.963 [2024-11-20 15:37:15.358907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:28.963 [2024-11-20 15:37:15.358917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:72928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.963 [2024-11-20 15:37:15.358922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:28.963 [2024-11-20 15:37:15.358933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:72944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.963 [2024-11-20 15:37:15.358937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0032 p:0 m:0 
dnr:0 00:27:28.963 [2024-11-20 15:37:15.358948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.963 [2024-11-20 15:37:15.358954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:28.963 [2024-11-20 15:37:15.358965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:73008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.963 [2024-11-20 15:37:15.358970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:28.963 [2024-11-20 15:37:15.358980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:73040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.963 [2024-11-20 15:37:15.358985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:28.963 [2024-11-20 15:37:15.358995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.963 [2024-11-20 15:37:15.359001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:28.963 [2024-11-20 15:37:15.359011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.964 [2024-11-20 15:37:15.359016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:28.964 [2024-11-20 15:37:15.359027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.964 [2024-11-20 15:37:15.359033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:28.964 [2024-11-20 15:37:15.359043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:72680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.964 [2024-11-20 15:37:15.359048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:28.964 [2024-11-20 15:37:15.359059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.964 [2024-11-20 15:37:15.359064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:28.964 [2024-11-20 15:37:15.359074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:72744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.964 [2024-11-20 15:37:15.359079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:28.964 [2024-11-20 15:37:15.359089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:72432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.964 [2024-11-20 15:37:15.359094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:28.964 [2024-11-20 15:37:15.359105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:72784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.964 [2024-11-20 15:37:15.359111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:28.964 [2024-11-20 15:37:15.359121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.964 [2024-11-20 15:37:15.359126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:28.964 [2024-11-20 15:37:15.359504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:73088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.964 [2024-11-20 15:37:15.359514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:28.964 [2024-11-20 15:37:15.359525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:73120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.964 [2024-11-20 15:37:15.359530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:28.964 [2024-11-20 15:37:15.359541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:73152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.964 [2024-11-20 15:37:15.359546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:28.964 [2024-11-20 15:37:15.359556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:73184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.964 [2024-11-20 15:37:15.359562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.964 [2024-11-20 15:37:15.359572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:73216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.964 [2024-11-20 15:37:15.359577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:28.964 [2024-11-20 15:37:15.359588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:73248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.964 [2024-11-20 15:37:15.359593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:28.964 [2024-11-20 15:37:15.359605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:73280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.964 [2024-11-20 15:37:15.359610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:28.964 [2024-11-20 15:37:15.359620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.964 [2024-11-20 15:37:15.359626] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:28.964 [2024-11-20 15:37:15.359636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:72440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.964 [2024-11-20 15:37:15.359641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:28.964 [2024-11-20 15:37:15.359651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.964 [2024-11-20 15:37:15.359656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:28.964 [2024-11-20 15:37:15.359667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:72936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.964 [2024-11-20 15:37:15.359672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:28.964 [2024-11-20 15:37:15.359682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.964 [2024-11-20 15:37:15.359687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:28.964 [2024-11-20 15:37:15.359697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:73000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.964 [2024-11-20 15:37:15.359702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:28.964 [2024-11-20 15:37:15.359713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:73032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.964 [2024-11-20 15:37:15.359719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:28.964 [2024-11-20 15:37:15.359729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:73064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.964 [2024-11-20 15:37:15.359734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:28.964 [2024-11-20 15:37:15.359744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:73096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.964 [2024-11-20 15:37:15.359749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:28.964 [2024-11-20 15:37:15.359759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:73128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.964 [2024-11-20 15:37:15.359765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:28.964 [2024-11-20 15:37:15.359776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:28.964 [2024-11-20 15:37:15.359781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
[... hundreds of further nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs elided: every READ/WRITE queued on qid:1 (lba 72424-74144) completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:27:28.969 [2024-11-20 15:37:15.380932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:74144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:28.969 [2024-11-20 15:37:15.380939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:27:28.969 [2024-11-20 15:37:15.380954] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:73504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.969 [2024-11-20 15:37:15.380962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:28.969 [2024-11-20 15:37:15.380976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:73568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.969 [2024-11-20 15:37:15.380984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:28.969 [2024-11-20 15:37:15.380998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:73632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.969 [2024-11-20 15:37:15.381005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:28.969 [2024-11-20 15:37:15.381019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:73832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.969 [2024-11-20 15:37:15.381025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:28.969 [2024-11-20 15:37:15.381039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:73848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.969 [2024-11-20 15:37:15.381047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:28.969 [2024-11-20 15:37:15.381061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:73880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.969 [2024-11-20 15:37:15.381069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:28.969 [2024-11-20 15:37:15.381083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:73912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.970 [2024-11-20 15:37:15.381091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:28.970 [2024-11-20 15:37:15.381105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:73944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.970 [2024-11-20 15:37:15.381115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:28.970 [2024-11-20 15:37:15.381129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:73160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.970 [2024-11-20 15:37:15.381137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:28.970 [2024-11-20 15:37:15.381152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:72816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.970 [2024-11-20 15:37:15.381164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001e p:0 m:0 
dnr:0 00:27:28.970 [2024-11-20 15:37:15.381179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:73888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.970 [2024-11-20 15:37:15.381186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:28.970 [2024-11-20 15:37:15.381200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:73952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.970 [2024-11-20 15:37:15.381208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:28.970 [2024-11-20 15:37:15.381224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:73544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.970 [2024-11-20 15:37:15.381231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:28.970 [2024-11-20 15:37:15.381246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:73672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.970 [2024-11-20 15:37:15.381254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.970 [2024-11-20 15:37:15.381267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.970 [2024-11-20 15:37:15.381276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:28.970 [2024-11-20 15:37:15.381291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:73360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.970 [2024-11-20 15:37:15.381298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:28.970 [2024-11-20 15:37:15.381312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:72936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.970 [2024-11-20 15:37:15.381320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:28.970 [2024-11-20 15:37:15.381936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:73776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.970 [2024-11-20 15:37:15.381949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:28.970 [2024-11-20 15:37:15.381965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:73584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.970 [2024-11-20 15:37:15.381972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:28.970 [2024-11-20 15:37:15.381986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.970 [2024-11-20 15:37:15.381993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:28.970 [2024-11-20 15:37:15.382007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:74176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.970 [2024-11-20 15:37:15.382014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:28.970 [2024-11-20 15:37:15.382028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:74192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.970 [2024-11-20 15:37:15.382036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:28.970 [2024-11-20 15:37:15.382050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:74208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.970 [2024-11-20 15:37:15.382056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:28.970 [2024-11-20 15:37:15.382071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.970 [2024-11-20 15:37:15.382078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:28.970 [2024-11-20 15:37:15.382091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:73680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.970 [2024-11-20 15:37:15.382101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:28.970 [2024-11-20 15:37:15.382116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:73224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.970 [2024-11-20 15:37:15.382123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:28.970 [2024-11-20 15:37:15.383105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:73760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.970 [2024-11-20 15:37:15.383120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:28.970 [2024-11-20 15:37:15.383136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:73824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.970 [2024-11-20 15:37:15.383143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:28.970 [2024-11-20 15:37:15.383163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:73552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.970 [2024-11-20 15:37:15.383171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:28.970 [2024-11-20 15:37:15.383185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:73840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.970 [2024-11-20 15:37:15.383193] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:28.970 [2024-11-20 15:37:15.383207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:73904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.970 [2024-11-20 15:37:15.383215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:28.970 [2024-11-20 15:37:15.383229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.970 [2024-11-20 15:37:15.383236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:28.970 [2024-11-20 15:37:15.383250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:74264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.970 [2024-11-20 15:37:15.383257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:28.970 [2024-11-20 15:37:15.383271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:74280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.970 [2024-11-20 15:37:15.383278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:28.970 [2024-11-20 15:37:15.383293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:74296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.970 [2024-11-20 15:37:15.383300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:28.970 [2024-11-20 15:37:15.383314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.970 [2024-11-20 15:37:15.383322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:28.970 [2024-11-20 15:37:15.383336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:74328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.970 [2024-11-20 15:37:15.383346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:28.970 [2024-11-20 15:37:15.383360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:73984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.970 [2024-11-20 15:37:15.383368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:28.970 [2024-11-20 15:37:15.383383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:73752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.970 [2024-11-20 15:37:15.383391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:28.970 [2024-11-20 15:37:15.383405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:73816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:28.970 [2024-11-20 15:37:15.383412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:28.970 [2024-11-20 15:37:15.383427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:74016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.970 [2024-11-20 15:37:15.383435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:28.971 [2024-11-20 15:37:15.383450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:74048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.971 [2024-11-20 15:37:15.383459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:28.971 [2024-11-20 15:37:15.383473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:74080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.971 [2024-11-20 15:37:15.383481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:28.971 [2024-11-20 15:37:15.383495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:74112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.971 [2024-11-20 15:37:15.383503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:28.971 [2024-11-20 15:37:15.383517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.971 [2024-11-20 15:37:15.383525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:28.971 [2024-11-20 15:37:15.383539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:73568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.971 [2024-11-20 15:37:15.383548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.971 [2024-11-20 15:37:15.383562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:73832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.971 [2024-11-20 15:37:15.383570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:28.971 [2024-11-20 15:37:15.383584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:73880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.971 [2024-11-20 15:37:15.383592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:28.971 [2024-11-20 15:37:15.383606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:73944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.971 [2024-11-20 15:37:15.383615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:28.971 [2024-11-20 15:37:15.383632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:72816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.971 [2024-11-20 15:37:15.383641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:28.971 [2024-11-20 15:37:15.383656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:73952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.971 [2024-11-20 15:37:15.383666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:28.971 [2024-11-20 15:37:15.383681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:73672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.971 [2024-11-20 15:37:15.383689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:28.971 [2024-11-20 15:37:15.383703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:73360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.971 [2024-11-20 15:37:15.383710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:28.971 [2024-11-20 15:37:15.383724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.971 [2024-11-20 15:37:15.383731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:28.971 [2024-11-20 15:37:15.383745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:74352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.971 [2024-11-20 15:37:15.383753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:28.971 [2024-11-20 15:37:15.383767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.971 [2024-11-20 15:37:15.383774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:28.971 [2024-11-20 15:37:15.383788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:73584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.971 [2024-11-20 15:37:15.383795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:28.971 [2024-11-20 15:37:15.383809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:74176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.971 [2024-11-20 15:37:15.383816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:28.971 [2024-11-20 15:37:15.383830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.971 [2024-11-20 15:37:15.383837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:28.971 [2024-11-20 15:37:15.383852] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:73680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.971 [2024-11-20 15:37:15.383859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:28.971 [2024-11-20 15:37:15.385209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:73744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.971 [2024-11-20 15:37:15.385224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:28.971 [2024-11-20 15:37:15.385243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:73520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.971 [2024-11-20 15:37:15.385250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:28.971 [2024-11-20 15:37:15.385265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.971 [2024-11-20 15:37:15.385272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:28.971 [2024-11-20 15:37:15.385286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.971 [2024-11-20 15:37:15.385293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:28.971 [2024-11-20 15:37:15.385308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.971 [2024-11-20 15:37:15.385315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:28.971 [2024-11-20 15:37:15.385329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.971 [2024-11-20 15:37:15.385336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:28.971 [2024-11-20 15:37:15.385351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.971 [2024-11-20 15:37:15.385358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:28.971 [2024-11-20 15:37:15.385371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:73976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.971 [2024-11-20 15:37:15.385379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:28.971 [2024-11-20 15:37:15.385393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:74008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.971 [2024-11-20 15:37:15.385400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 
00:27:28.971 [2024-11-20 15:37:15.385414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:74040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.971 [2024-11-20 15:37:15.385422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:28.971 [2024-11-20 15:37:15.385435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:74072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.971 [2024-11-20 15:37:15.385442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:28.971 [2024-11-20 15:37:15.385457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:74104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.971 [2024-11-20 15:37:15.385464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:28.971 [2024-11-20 15:37:15.385478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:74136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.971 [2024-11-20 15:37:15.385485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:28.971 [2024-11-20 15:37:15.385499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:73824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.971 [2024-11-20 15:37:15.385509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:28.971 [2024-11-20 15:37:15.385523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:73840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.971 [2024-11-20 15:37:15.385531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:28.971 [2024-11-20 15:37:15.385545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:74248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.971 [2024-11-20 15:37:15.385553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:28.971 [2024-11-20 15:37:15.385566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:74280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.971 [2024-11-20 15:37:15.385574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:28.971 [2024-11-20 15:37:15.385588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.971 [2024-11-20 15:37:15.385595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.971 [2024-11-20 15:37:15.385609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:73984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.971 [2024-11-20 15:37:15.385616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:28.971 [2024-11-20 15:37:15.385630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:73816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.972 [2024-11-20 15:37:15.385637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:28.972 [2024-11-20 15:37:15.385652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:74048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.972 [2024-11-20 15:37:15.385659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:28.972 [2024-11-20 15:37:15.385673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:74112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.972 [2024-11-20 15:37:15.385681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:28.972 [2024-11-20 15:37:15.385695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:73568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.972 [2024-11-20 15:37:15.385702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:28.972 [2024-11-20 15:37:15.385716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:73880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.972 [2024-11-20 15:37:15.385724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:28.972 [2024-11-20 15:37:15.385738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:72816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.972 [2024-11-20 15:37:15.385746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:28.972 [2024-11-20 15:37:15.385760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:73672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.972 [2024-11-20 15:37:15.385768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:28.972 [2024-11-20 15:37:15.385782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.972 [2024-11-20 15:37:15.385789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:28.972 [2024-11-20 15:37:15.385803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:73696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.972 [2024-11-20 15:37:15.385811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:28.972 [2024-11-20 15:37:15.385824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.972 [2024-11-20 15:37:15.385832] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:28.972 [2024-11-20 15:37:15.385845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:73680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.972 [2024-11-20 15:37:15.385853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:28.972 [2024-11-20 15:37:15.387739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:73920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.972 [2024-11-20 15:37:15.387756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:28.972 [2024-11-20 15:37:15.387769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:74456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.972 [2024-11-20 15:37:15.387775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:28.972 [2024-11-20 15:37:15.387786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.972 [2024-11-20 15:37:15.387792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:28.972 [2024-11-20 15:37:15.387803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.972 [2024-11-20 15:37:15.387809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:28.972 [2024-11-20 15:37:15.387820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:73648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.972 [2024-11-20 15:37:15.387826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:28.972 [2024-11-20 15:37:15.387838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.972 [2024-11-20 15:37:15.387843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:28.972 [2024-11-20 15:37:15.387855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.972 [2024-11-20 15:37:15.387861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:28.972 [2024-11-20 15:37:15.387872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.972 [2024-11-20 15:37:15.387877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:28.972 [2024-11-20 15:37:15.387892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:28.972 [2024-11-20 15:37:15.387897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:28.972 [2024-11-20 15:37:15.387908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:74528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.972 [2024-11-20 15:37:15.387915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:28.972 [2024-11-20 15:37:15.387926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:74544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.972 [2024-11-20 15:37:15.387932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:28.972 [2024-11-20 15:37:15.387943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.972 [2024-11-20 15:37:15.387950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:28.972 [2024-11-20 15:37:15.387961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.972 [2024-11-20 15:37:15.387967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:28.972 [2024-11-20 15:37:15.387979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.972 [2024-11-20 15:37:15.387984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:28.972 [2024-11-20 15:37:15.387996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:74608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.972 [2024-11-20 15:37:15.388002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:28.972 [2024-11-20 15:37:15.388013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:73520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.972 [2024-11-20 15:37:15.388019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:28.972 [2024-11-20 15:37:15.388030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.972 [2024-11-20 15:37:15.388036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:28.972 [2024-11-20 15:37:15.388048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.972 [2024-11-20 15:37:15.388054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.972 [2024-11-20 15:37:15.388065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 
lba:73976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.972 [2024-11-20 15:37:15.388071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.972 [2024-11-20 15:37:15.388083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:74040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.972 [2024-11-20 15:37:15.388089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.972 [2024-11-20 15:37:15.388102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:74104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.972 [2024-11-20 15:37:15.388108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:28.972 [2024-11-20 15:37:15.388120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:73824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.972 [2024-11-20 15:37:15.388125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:28.972 [2024-11-20 15:37:15.388137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:74248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.972 [2024-11-20 15:37:15.388143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:28.972 [2024-11-20 15:37:15.388154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.972 [2024-11-20 15:37:15.388163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:28.972 [2024-11-20 15:37:15.388175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:73816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.972 [2024-11-20 15:37:15.388180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:28.972 [2024-11-20 15:37:15.388191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:74112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.972 [2024-11-20 15:37:15.388198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:28.972 [2024-11-20 15:37:15.388209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:73880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.972 [2024-11-20 15:37:15.388215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:28.972 [2024-11-20 15:37:15.388226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.972 [2024-11-20 15:37:15.388231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:28.972 [2024-11-20 15:37:15.388242] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:73696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.973 [2024-11-20 15:37:15.388248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:28.973 [2024-11-20 15:37:15.388259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:73680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.973 [2024-11-20 15:37:15.388265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:28.973 [2024-11-20 15:37:15.388276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:74256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.973 [2024-11-20 15:37:15.388281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:28.973 [2024-11-20 15:37:15.388293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:74288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.973 [2024-11-20 15:37:15.388298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:28.973 [2024-11-20 15:37:15.388311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:74320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.973 [2024-11-20 15:37:15.388317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:28.973 [2024-11-20 15:37:15.388328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:74000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.973 [2024-11-20 15:37:15.388334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:28.973 [2024-11-20 15:37:15.388345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:74064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.973 [2024-11-20 15:37:15.388351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:28.973 [2024-11-20 15:37:15.388362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:74128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.973 [2024-11-20 15:37:15.388368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:28.973 [2024-11-20 15:37:15.388380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:72848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.973 [2024-11-20 15:37:15.388385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:28.973 [2024-11-20 15:37:15.388397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:74624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.973 [2024-11-20 15:37:15.388403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0014 p:0 m:0 
dnr:0
00:27:28.973 [2024-11-20 15:37:15.388415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:28.973 [2024-11-20 15:37:15.388421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:27:28.973 [2024-11-20 15:37:15.388433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:74656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:28.973 [2024-11-20 15:37:15.388439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
[... roughly 200 further nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs elided; every completion on qid:1 returned ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 0017 through 007f wrapping to 0000 through 0062 ...]
00:27:28.978 [2024-11-20 15:37:15.400279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:74576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.978 [2024-11-20 15:37:15.400284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:28.978 [2024-11-20 15:37:15.400295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.978 [2024-11-20 15:37:15.400300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:28.978 [2024-11-20 15:37:15.400311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:74840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.978 [2024-11-20 15:37:15.400316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:28.978 [2024-11-20 15:37:15.400327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:74968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.978 [2024-11-20 15:37:15.400332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:28.978 [2024-11-20 15:37:15.400343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.978 [2024-11-20 15:37:15.400348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:28.978 [2024-11-20 15:37:15.400359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:74832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.978 [2024-11-20 15:37:15.400364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:28.978 [2024-11-20 15:37:15.400375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:75328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.978 [2024-11-20 15:37:15.400381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:28.978 [2024-11-20 15:37:15.400391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:74848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.978 [2024-11-20 15:37:15.400397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:28.978 [2024-11-20 15:37:15.400407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:75056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.978 [2024-11-20 15:37:15.400414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:28.978 [2024-11-20 15:37:15.400424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:75360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.978 [2024-11-20 15:37:15.400431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:28.978 [2024-11-20 15:37:15.400441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 
nsid:1 lba:75392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.978 [2024-11-20 15:37:15.400447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:28.978 [2024-11-20 15:37:15.400458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:75424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.978 [2024-11-20 15:37:15.400463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:28.978 [2024-11-20 15:37:15.400474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:75456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.978 [2024-11-20 15:37:15.400480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:28.979 [2024-11-20 15:37:15.400491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:75488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.979 [2024-11-20 15:37:15.400496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:28.979 [2024-11-20 15:37:15.400507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:75096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.979 [2024-11-20 15:37:15.400513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:28.979 [2024-11-20 15:37:15.400523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.979 [2024-11-20 15:37:15.400529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:28.979 [2024-11-20 15:37:15.400539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:74456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.979 [2024-11-20 15:37:15.400545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:28.979 [2024-11-20 15:37:15.400556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:75064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.979 [2024-11-20 15:37:15.400561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:28.979 [2024-11-20 15:37:15.400572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:75240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.979 [2024-11-20 15:37:15.400577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:28.979 [2024-11-20 15:37:15.400588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:75304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.979 [2024-11-20 15:37:15.400593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:28.979 [2024-11-20 15:37:15.400604] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:74976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.979 [2024-11-20 15:37:15.400613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:28.979 [2024-11-20 15:37:15.400624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:74784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.979 [2024-11-20 15:37:15.400629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:28.979 [2024-11-20 15:37:15.400640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:74816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.979 [2024-11-20 15:37:15.400646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:28.979 [2024-11-20 15:37:15.400656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.979 [2024-11-20 15:37:15.400662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:28.979 [2024-11-20 15:37:15.402490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:75560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.979 [2024-11-20 15:37:15.402505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:28.979 [2024-11-20 15:37:15.402518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:75368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.979 [2024-11-20 15:37:15.402523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:28.979 [2024-11-20 15:37:15.402534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:75400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.979 [2024-11-20 15:37:15.402539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:28.979 [2024-11-20 15:37:15.402550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.979 [2024-11-20 15:37:15.402555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:28.979 [2024-11-20 15:37:15.402565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.979 [2024-11-20 15:37:15.402570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:28.979 [2024-11-20 15:37:15.402580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:75584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.979 [2024-11-20 15:37:15.402586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:28.979 [2024-11-20 15:37:15.402596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.979 [2024-11-20 15:37:15.402602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.979 [2024-11-20 15:37:15.402612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:75616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.979 [2024-11-20 15:37:15.402617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.979 [2024-11-20 15:37:15.402627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:75632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.979 [2024-11-20 15:37:15.402633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:28.979 [2024-11-20 15:37:15.402646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:75648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.979 [2024-11-20 15:37:15.402652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:28.979 [2024-11-20 15:37:15.402663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:75664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.979 [2024-11-20 15:37:15.402668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:28.979 [2024-11-20 15:37:15.402678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:75680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.979 [2024-11-20 15:37:15.402684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:28.979 [2024-11-20 15:37:15.402694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:75696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.979 [2024-11-20 15:37:15.402700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:28.979 [2024-11-20 15:37:15.402711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:75712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.979 [2024-11-20 15:37:15.402716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:28.979 [2024-11-20 15:37:15.402727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:75728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.979 [2024-11-20 15:37:15.402732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:28.979 [2024-11-20 15:37:15.402743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.979 [2024-11-20 15:37:15.402749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:17 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:28.979 [2024-11-20 15:37:15.402760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:75496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.979 [2024-11-20 15:37:15.402765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:28.979 [2024-11-20 15:37:15.402776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:75080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.979 [2024-11-20 15:37:15.402782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:28.979 [2024-11-20 15:37:15.402792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:75144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.979 [2024-11-20 15:37:15.402798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:28.979 [2024-11-20 15:37:15.402809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:75136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.979 [2024-11-20 15:37:15.402814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:28.979 [2024-11-20 15:37:15.402825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:75200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.979 [2024-11-20 15:37:15.402831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:28.979 [2024-11-20 15:37:15.402843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:75264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.979 [2024-11-20 15:37:15.402848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:28.979 [2024-11-20 15:37:15.402859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:75520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.979 [2024-11-20 15:37:15.402864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:28.979 [2024-11-20 15:37:15.402875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:75552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.980 [2024-11-20 15:37:15.402881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:28.980 [2024-11-20 15:37:15.402892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:75024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.980 [2024-11-20 15:37:15.402897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:28.980 [2024-11-20 15:37:15.402908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.980 [2024-11-20 15:37:15.402914] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:28.980 [2024-11-20 15:37:15.402924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:74968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.980 [2024-11-20 15:37:15.402930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:28.980 [2024-11-20 15:37:15.402940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:74832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.980 [2024-11-20 15:37:15.402945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:28.980 [2024-11-20 15:37:15.402956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:74848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.980 [2024-11-20 15:37:15.402961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:28.980 [2024-11-20 15:37:15.402972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:75360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.980 [2024-11-20 15:37:15.402977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:28.980 [2024-11-20 15:37:15.402988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:75424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.980 [2024-11-20 15:37:15.402994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:28.980 [2024-11-20 15:37:15.403004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:75488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.980 [2024-11-20 15:37:15.403010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:28.980 [2024-11-20 15:37:15.403021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:75160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.980 [2024-11-20 15:37:15.403026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:28.980 [2024-11-20 15:37:15.403037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:75064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.980 [2024-11-20 15:37:15.403044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:28.980 [2024-11-20 15:37:15.403055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:75304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.980 [2024-11-20 15:37:15.403061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:28.980 [2024-11-20 15:37:15.403071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:74784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:28.980 [2024-11-20 15:37:15.403077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:28.980 [2024-11-20 15:37:15.403088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:74416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.980 [2024-11-20 15:37:15.403093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:28.980 [2024-11-20 15:37:15.403104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.980 [2024-11-20 15:37:15.403110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:28.980 [2024-11-20 15:37:15.403121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.980 [2024-11-20 15:37:15.403127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:28.980 [2024-11-20 15:37:15.403138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:75288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.980 [2024-11-20 15:37:15.403144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.980 [2024-11-20 15:37:15.403155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:74944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.980 [2024-11-20 15:37:15.403166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:28.980 [2024-11-20 15:37:15.403176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:74888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.980 [2024-11-20 15:37:15.403182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:28.980 [2024-11-20 15:37:15.403192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.980 [2024-11-20 15:37:15.403198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:28.980 [2024-11-20 15:37:15.403209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:75776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.980 [2024-11-20 15:37:15.403215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:28.980 [2024-11-20 15:37:15.403226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:74792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.980 [2024-11-20 15:37:15.403231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:28.980 [2024-11-20 15:37:15.403705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 
nsid:1 lba:75344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.980 [2024-11-20 15:37:15.403718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:28.980 [2024-11-20 15:37:15.403730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:75408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.980 [2024-11-20 15:37:15.403736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:28.980 [2024-11-20 15:37:15.403747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:75472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.980 [2024-11-20 15:37:15.403753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:28.980 [2024-11-20 15:37:15.403764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:75792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.980 [2024-11-20 15:37:15.403769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:28.980 [2024-11-20 15:37:15.403780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.980 [2024-11-20 15:37:15.403786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:28.980 [2024-11-20 15:37:15.403796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.980 [2024-11-20 15:37:15.403802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:28.980 [2024-11-20 15:37:15.403812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.980 [2024-11-20 15:37:15.403818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:28.980 [2024-11-20 15:37:15.403828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.980 [2024-11-20 15:37:15.403833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:28.980 [2024-11-20 15:37:15.403844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.980 [2024-11-20 15:37:15.403849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:28.980 [2024-11-20 15:37:15.403860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:75128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.980 [2024-11-20 15:37:15.403866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:28.980 [2024-11-20 15:37:15.405028] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:75208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.980 [2024-11-20 15:37:15.405040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:28.980 [2024-11-20 15:37:15.405052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:75040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.980 [2024-11-20 15:37:15.405058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:28.980 [2024-11-20 15:37:15.405068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:75880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.980 [2024-11-20 15:37:15.405073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:28.980 [2024-11-20 15:37:15.405086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:75896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.980 [2024-11-20 15:37:15.405091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:28.980 [2024-11-20 15:37:15.405102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.980 [2024-11-20 15:37:15.405107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:28.980 [2024-11-20 15:37:15.405117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.980 [2024-11-20 15:37:15.405122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:28.980 [2024-11-20 15:37:15.405132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.980 [2024-11-20 15:37:15.405138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:28.981 [2024-11-20 15:37:15.405149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.981 [2024-11-20 15:37:15.405154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:28.981 [2024-11-20 15:37:15.405168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.981 [2024-11-20 15:37:15.405174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:28.981 [2024-11-20 15:37:15.405185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:75592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.981 [2024-11-20 15:37:15.405190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003b p:0 m:0 dnr:0 
00:27:28.981 [2024-11-20 15:37:15.405200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:75624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.981 [2024-11-20 15:37:15.405206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:28.981 [2024-11-20 15:37:15.405217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:75656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.981 [2024-11-20 15:37:15.405222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:28.981 [2024-11-20 15:37:15.405233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.981 [2024-11-20 15:37:15.405238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:28.981 [2024-11-20 15:37:15.405249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:75432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.981 [2024-11-20 15:37:15.405255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:28.981 [2024-11-20 15:37:15.405266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.981 [2024-11-20 15:37:15.405271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:28.981 [2024-11-20 15:37:15.405284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:75616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.981 [2024-11-20 15:37:15.405289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:28.981 [2024-11-20 15:37:15.405300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:75648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.981 [2024-11-20 15:37:15.405305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.981 [2024-11-20 15:37:15.405315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:75680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.981 [2024-11-20 15:37:15.405321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:28.981 [2024-11-20 15:37:15.405332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.981 [2024-11-20 15:37:15.405337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:28.981 [2024-11-20 15:37:15.405348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:75464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.981 [2024-11-20 15:37:15.405353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:28.981 [2024-11-20 15:37:15.405364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:75080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.981 [2024-11-20 15:37:15.405370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:28.981 [2024-11-20 15:37:15.405382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.981 [2024-11-20 15:37:15.405387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:28.981 [2024-11-20 15:37:15.405397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:75264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.981 [2024-11-20 15:37:15.405403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:28.981 [2024-11-20 15:37:15.405413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:75552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.981 [2024-11-20 15:37:15.405419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:28.981 [2024-11-20 15:37:15.405430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.981 [2024-11-20 15:37:15.405436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:28.981 [2024-11-20 15:37:15.405446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:74832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.981 [2024-11-20 15:37:15.405452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:28.981 [2024-11-20 15:37:15.405463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:75360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.981 [2024-11-20 15:37:15.405468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:28.981 [2024-11-20 15:37:15.405868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:75488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.981 [2024-11-20 15:37:15.405881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:28.981 [2024-11-20 15:37:15.405893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:75064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.981 [2024-11-20 15:37:15.405899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:28.981 [2024-11-20 15:37:15.405909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:74784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.981 [2024-11-20 15:37:15.405915] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:28.981 [2024-11-20 15:37:15.405926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.981 [2024-11-20 15:37:15.405932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:28.981 [2024-11-20 15:37:15.405943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:75288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.981 [2024-11-20 15:37:15.405949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:28.981 [2024-11-20 15:37:15.405960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:74888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.981 [2024-11-20 15:37:15.405965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:28.981 [2024-11-20 15:37:15.405976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:75776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.981 [2024-11-20 15:37:15.405982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:28.981 [2024-11-20 15:37:15.405992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:75688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.981 [2024-11-20 15:37:15.405998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:28.981 [2024-11-20 15:37:15.406008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:75720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.981 [2024-11-20 15:37:15.406014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:28.981 [2024-11-20 15:37:15.406024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:75536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.981 [2024-11-20 15:37:15.406030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:28.981 [2024-11-20 15:37:15.406040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:75408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.981 [2024-11-20 15:37:15.406046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:28.981 [2024-11-20 15:37:15.406056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:75792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.981 [2024-11-20 15:37:15.406062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:28.981 [2024-11-20 15:37:15.406072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:28.981 [2024-11-20 15:37:15.406079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:28.981 [2024-11-20 15:37:15.406090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.981 [2024-11-20 15:37:15.406096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:28.981 [2024-11-20 15:37:15.406107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:75128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.981 [2024-11-20 15:37:15.406112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:28.981 [2024-11-20 15:37:15.406123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:75392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.981 [2024-11-20 15:37:15.406128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:28.981 [2024-11-20 15:37:15.406139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.981 [2024-11-20 15:37:15.406144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:28.981 [2024-11-20 15:37:15.406155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.981 [2024-11-20 15:37:15.406166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:28.982 [2024-11-20 15:37:15.406176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.982 [2024-11-20 15:37:15.406183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:28.982 [2024-11-20 15:37:15.406193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:75240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.982 [2024-11-20 15:37:15.406199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:28.982 [2024-11-20 15:37:15.406210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.982 [2024-11-20 15:37:15.406216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:28.982 [2024-11-20 15:37:15.406572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:75768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.982 [2024-11-20 15:37:15.406581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.982 [2024-11-20 15:37:15.406593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 
lba:75800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.982 [2024-11-20 15:37:15.406599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:27:28.982 [2024-11-20 15:37:15.406610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:28.982 [2024-11-20 15:37:15.406615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:27:28.982 [2024-11-20 15:37:15.406626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:28.982 [2024-11-20 15:37:15.406631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
[... further nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs of the same form: READ and WRITE commands on sqid:1 (nsid:1, lba 74832-76880, len:8) each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 15:37:15.406 to 15:37:15.416 ...]
00:27:28.988 [2024-11-20 15:37:15.416847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:28.988 [2024-11-20 15:37:15.416852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:27:28.988 [2024-11-20 15:37:15.416862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.988 [2024-11-20 15:37:15.416868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS
INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:28.988 [2024-11-20 15:37:15.416878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.988 [2024-11-20 15:37:15.416884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:28.988 [2024-11-20 15:37:15.416894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:76224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.988 [2024-11-20 15:37:15.416900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:28.988 [2024-11-20 15:37:15.416910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:75960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.988 [2024-11-20 15:37:15.416916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:28.988 [2024-11-20 15:37:15.416926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.988 [2024-11-20 15:37:15.416933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:28.988 [2024-11-20 15:37:15.416944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.988 [2024-11-20 15:37:15.416950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:28.988 [2024-11-20 15:37:15.416960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:76184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.988 [2024-11-20 15:37:15.416966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:28.988 [2024-11-20 15:37:15.416976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:74888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.988 [2024-11-20 15:37:15.416982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:28.988 [2024-11-20 15:37:15.416994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:76368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.988 [2024-11-20 15:37:15.416999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:28.988 [2024-11-20 15:37:15.417010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:76896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.988 [2024-11-20 15:37:15.417016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:28.988 [2024-11-20 15:37:15.417026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.988 [2024-11-20 15:37:15.417032] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:28.988 [2024-11-20 15:37:15.417042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:76928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.988 [2024-11-20 15:37:15.417049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:28.988 [2024-11-20 15:37:15.417059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:76112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.988 [2024-11-20 15:37:15.417065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:28.988 [2024-11-20 15:37:15.417076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:76440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.988 [2024-11-20 15:37:15.417082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:28.988 [2024-11-20 15:37:15.417934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:76504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.988 [2024-11-20 15:37:15.417946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:28.988 [2024-11-20 15:37:15.417958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:76936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.988 [2024-11-20 15:37:15.417965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:28.988 [2024-11-20 15:37:15.417975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.988 [2024-11-20 15:37:15.417981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:28.988 [2024-11-20 15:37:15.417995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:76968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.988 [2024-11-20 15:37:15.418001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.988 [2024-11-20 15:37:15.418011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.988 [2024-11-20 15:37:15.418017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:28.988 [2024-11-20 15:37:15.418028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:76664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.988 [2024-11-20 15:37:15.418034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:28.988 [2024-11-20 15:37:15.418045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:76552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:28.988 [2024-11-20 15:37:15.418051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:28.988 [2024-11-20 15:37:15.418061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:76616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.988 [2024-11-20 15:37:15.418067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:28.988 [2024-11-20 15:37:15.418078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:76216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.988 [2024-11-20 15:37:15.418083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:28.988 [2024-11-20 15:37:15.418094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.988 [2024-11-20 15:37:15.418099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:28.988 [2024-11-20 15:37:15.418900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:77000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.988 [2024-11-20 15:37:15.418913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:28.989 [2024-11-20 15:37:15.418934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:77016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.989 [2024-11-20 15:37:15.418940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:28.989 [2024-11-20 15:37:15.418950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:77032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.989 [2024-11-20 15:37:15.418956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:28.989 [2024-11-20 15:37:15.418966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:77048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.989 [2024-11-20 15:37:15.418971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:28.989 [2024-11-20 15:37:15.418982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:77064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.989 [2024-11-20 15:37:15.418987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:28.989 [2024-11-20 15:37:15.419000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:77080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.989 [2024-11-20 15:37:15.419006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:28.989 [2024-11-20 15:37:15.419016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 
lba:77096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.989 [2024-11-20 15:37:15.419021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:28.989 [2024-11-20 15:37:15.419031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:77112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.989 [2024-11-20 15:37:15.419036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:28.989 [2024-11-20 15:37:15.419047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:77128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.989 [2024-11-20 15:37:15.419053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:28.989 [2024-11-20 15:37:15.419063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:77144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.989 [2024-11-20 15:37:15.419069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:28.989 [2024-11-20 15:37:15.419079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:76720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.989 [2024-11-20 15:37:15.419085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:28.989 [2024-11-20 15:37:15.419095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:76752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.989 [2024-11-20 15:37:15.419101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:28.989 [2024-11-20 15:37:15.419111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.989 [2024-11-20 15:37:15.419116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:28.989 [2024-11-20 15:37:15.419127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:76744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.989 [2024-11-20 15:37:15.419132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:28.989 [2024-11-20 15:37:15.419143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:76496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.989 [2024-11-20 15:37:15.419148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:28.989 [2024-11-20 15:37:15.419163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:76560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.989 [2024-11-20 15:37:15.419170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:28.989 [2024-11-20 15:37:15.419180] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:76624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.989 [2024-11-20 15:37:15.419185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:28.989 [2024-11-20 15:37:15.419196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:76784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.989 [2024-11-20 15:37:15.419203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:28.989 [2024-11-20 15:37:15.419214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.989 [2024-11-20 15:37:15.419220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:28.989 [2024-11-20 15:37:15.419232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:76848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.989 [2024-11-20 15:37:15.419237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:28.989 [2024-11-20 15:37:15.419247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.989 [2024-11-20 15:37:15.419253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:28.989 [2024-11-20 15:37:15.419263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.989 [2024-11-20 15:37:15.419269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:28.989 [2024-11-20 15:37:15.419280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:75680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.989 [2024-11-20 15:37:15.419285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:28.989 [2024-11-20 15:37:15.419296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:75760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.989 [2024-11-20 15:37:15.419302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:28.989 [2024-11-20 15:37:15.419312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:75360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.989 [2024-11-20 15:37:15.419318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:28.989 [2024-11-20 15:37:15.419328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.989 [2024-11-20 15:37:15.419334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:27:28.989 [2024-11-20 15:37:15.419346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.989 [2024-11-20 15:37:15.419351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:28.989 [2024-11-20 15:37:15.419361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.989 [2024-11-20 15:37:15.419367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:28.990 [2024-11-20 15:37:15.419377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:75960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.990 [2024-11-20 15:37:15.419383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:28.990 [2024-11-20 15:37:15.419394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.990 [2024-11-20 15:37:15.419403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:28.990 [2024-11-20 15:37:15.419413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:74888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.990 [2024-11-20 15:37:15.419419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:28.990 [2024-11-20 15:37:15.419429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:76896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.990 [2024-11-20 15:37:15.419436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:28.990 [2024-11-20 15:37:15.419446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.990 [2024-11-20 15:37:15.419452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:28.990 [2024-11-20 15:37:15.419463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:76440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.990 [2024-11-20 15:37:15.419468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:28.990 [2024-11-20 15:37:15.420035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.990 [2024-11-20 15:37:15.420047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:28.990 [2024-11-20 15:37:15.420059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:76824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.990 [2024-11-20 15:37:15.420065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:28.990 [2024-11-20 15:37:15.420076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:76856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.990 [2024-11-20 15:37:15.420082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:28.990 [2024-11-20 15:37:15.420093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:76888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.990 [2024-11-20 15:37:15.420098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:28.990 [2024-11-20 15:37:15.420109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:76048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.990 [2024-11-20 15:37:15.420115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:28.990 [2024-11-20 15:37:15.420126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:76456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.990 [2024-11-20 15:37:15.420132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:28.990 [2024-11-20 15:37:15.420142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:76656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.990 [2024-11-20 15:37:15.420148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:28.990 [2024-11-20 15:37:15.420163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:76568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.990 [2024-11-20 15:37:15.420169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:28.990 [2024-11-20 15:37:15.420182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:76312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.990 [2024-11-20 15:37:15.420188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:28.990 [2024-11-20 15:37:15.420199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:77152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.990 [2024-11-20 15:37:15.420204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:28.990 [2024-11-20 15:37:15.420215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:77168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.990 [2024-11-20 15:37:15.420221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:28.990 [2024-11-20 15:37:15.420231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:77184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.990 [2024-11-20 15:37:15.420237] nvme_qpair.c: 
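The (03/02) status in these completions is an ANA (Asymmetric Namespace Access) status code: the controller is reachable, but the namespace is not currently served through this path, which is exactly the condition the multipath_status test drives while it flips path states. The flipping itself happens earlier in the run (not shown in this excerpt) through the SPDK RPC client. A hedged sketch of such a toggle, assuming the usual scripts/rpc.py flags and with a placeholder listener address/port (check rpc.py nvmf_subsystem_listener_set_ana_state -h on your build for the exact options):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Drive one listener's path into the state that produces the (03/02) completions above
  # (10.0.0.2:4420 is a placeholder, not taken from this log):
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
  # ...and restore it once the surviving path has been exercised:
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -n optimized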
00:27:28.990 11671.76 IOPS, 45.59 MiB/s
00:27:28.990 [2024-11-20T14:37:17.950Z] 11723.31 IOPS, 45.79 MiB/s
00:27:28.990 [2024-11-20T14:37:17.950Z] Received shutdown signal, test time was about 26.923788 seconds
00:27:28.990
00:27:28.990 Latency(us)
00:27:28.990 [2024-11-20T14:37:17.950Z] Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:27:28.990 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:27:28.990 Verification LBA range: start 0x0 length 0x4000
00:27:28.990 	 Nvme0n1                 :      26.92   11744.30      45.88      0.00      0.00   10862.58     105.39 3467291.31
00:27:28.990 [2024-11-20T14:37:17.950Z] ===================================================================================================================
00:27:28.990 [2024-11-20T14:37:17.950Z] Total                       :            11744.30      45.88      0.00      0.00   10862.58     105.39 3467291.31
00:27:28.990 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:29.252 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:27:29.252 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:27:29.252 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:27:29.252 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:29.252 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:27:29.252 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:29.252 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:27:29.252 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:29.252 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:29.252 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:27:29.252 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:29.252 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:27:29.252 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:27:29.252 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 732010 ']'
00:27:29.252 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 732010
00:27:29.252 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 732010 ']'
00:27:29.252 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 732010
00:27:29.252 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:27:29.252 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:29.252 15:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 732010
00:27:29.252 15:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:29.252 15:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:29.252 15:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 732010'
00:27:29.252 killing process with pid 732010
00:27:29.252 15:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 732010
00:27:29.252 15:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 732010
00:27:29.252 15:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:27:29.252 15:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:27:29.252 15:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:27:29.252 15:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:27:29.252 15:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:27:29.252 15:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:27:29.252 15:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:27:29.252 15:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:29.252 15:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:27:29.252 15:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:29.252 15:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:29.252 15:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:27:31.796
00:27:31.796 real	0m41.324s
00:27:31.796 user	1m46.571s
00:27:31.796 sys	0m11.528s
00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:27:31.796 ************************************
00:27:31.796 END TEST nvmf_host_multipath_status
00:27:31.796 ************************************
00:27:31.796 15:37:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:27:31.796 15:37:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:27:31.796 15:37:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:31.796 15:37:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
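The @954-@978 trace above is autotest_common.sh's killprocess helper tearing down the nvmf target (pid 732010). A condensed sketch of the flow the trace implies, assuming Linux; this is a reconstruction, not the verbatim helper, and the sudo-child lookup in particular is an assumption:

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                             # @954: a pid is required
      kill -0 "$pid" 2>/dev/null || return 0                # @958: already gone?
      local process_name=
      if [ "$(uname)" = Linux ]; then                       # @959
          process_name=$(ps --no-headers -o comm= "$pid")   # @960: here it is reactor_0
      fi
      if [ "$process_name" = sudo ]; then                   # @964: never signal sudo itself
          pid=$(ps --ppid "$pid" --no-headers -o pid=)      # assumption: target the child instead
      fi
      echo "killing process with pid $pid"                  # @972
      kill "$pid"                                           # @973
      wait "$pid" || true                                   # @978: reap it, tolerating nonzero exit
  }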
00:27:31.796 ************************************
00:27:31.796 START TEST nvmf_discovery_remove_ifc
00:27:31.796 ************************************
00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:27:31.796 * Looking for test storage...
00:27:31.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version
00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1
00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 ))
00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:31.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.796 --rc genhtml_branch_coverage=1 00:27:31.796 --rc genhtml_function_coverage=1 00:27:31.796 --rc genhtml_legend=1 00:27:31.796 --rc geninfo_all_blocks=1 00:27:31.796 --rc geninfo_unexecuted_blocks=1 00:27:31.796 00:27:31.796 ' 00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:31.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.796 --rc genhtml_branch_coverage=1 00:27:31.796 --rc genhtml_function_coverage=1 00:27:31.796 --rc genhtml_legend=1 00:27:31.796 --rc geninfo_all_blocks=1 00:27:31.796 --rc geninfo_unexecuted_blocks=1 00:27:31.796 00:27:31.796 ' 00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:31.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.796 --rc genhtml_branch_coverage=1 00:27:31.796 --rc genhtml_function_coverage=1 00:27:31.796 --rc genhtml_legend=1 00:27:31.796 --rc geninfo_all_blocks=1 00:27:31.796 --rc geninfo_unexecuted_blocks=1 00:27:31.796 00:27:31.796 ' 00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:31.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.796 --rc genhtml_branch_coverage=1 00:27:31.796 --rc genhtml_function_coverage=1 00:27:31.796 --rc genhtml_legend=1 00:27:31.796 --rc geninfo_all_blocks=1 00:27:31.796 --rc geninfo_unexecuted_blocks=1 00:27:31.796 00:27:31.796 ' 00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:31.796 
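The scripts/common.sh trace above (@333-@368) is a pure-bash version comparison: both version strings are split on '.', '-' and ':' and compared component-wise, so the lcov check `lt 1.15 2` reduces to 1 < 2 at the very first component. A condensed sketch of that walk, assuming purely numeric components (a reconstruction, not the verbatim scripts/common.sh):

  cmp_versions() {
      local IFS=.-:                      # @336/@337: split fields on '.', '-', ':'
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      local op=$2
      read -ra ver2 <<< "$3"
      local v
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          # Missing components count as 0; 10# forces base-10 despite leading zeros.
          (( 10#${ver1[v]:-0} > 10#${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
          (( 10#${ver1[v]:-0} < 10#${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
      done
      [[ $op == '=' ]]                   # every component compared equal
  }
  cmp_versions 1.15 '<' 2 && echo 'lcov 1.15 is older than 2'   # succeeds, matching the trace's return 0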
15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:31.796 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:31.797 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:27:31.797 15:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:27:39.933 15:37:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:39.933 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:39.933 15:37:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:39.933 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:39.933 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:39.933 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:39.933 
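[annotation] The namespace setup traced above can be reproduced by hand. A minimal sketch, assuming the same E810 port names (cvl_0_0 / cvl_0_1) and 10.0.0.0/24 addressing as this run; every command below is copied from the nvmf_tcp_init trace:

# One NIC port goes into a namespace for the target, its sibling stays in the
# root namespace for the initiator; TCP/4420 is opened for the NVMe-oF listener.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2   # initiator -> target reachability check, as traced next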
15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:39.933 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:39.933 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms 00:27:39.933 00:27:39.933 --- 10.0.0.2 ping statistics --- 00:27:39.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.933 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:27:39.933 15:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:39.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:39.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:27:39.933 00:27:39.933 --- 10.0.0.1 ping statistics --- 00:27:39.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.933 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:27:39.933 15:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:39.933 15:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:27:39.933 15:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:39.933 15:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:39.933 15:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:39.933 15:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:39.933 15:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:39.933 15:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:39.933 15:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:39.933 15:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:39.933 15:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:39.933 15:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:39.933 15:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:39.933 15:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=742562 00:27:39.934 15:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 742562 00:27:39.934 15:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:39.934 15:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 742562 ']' 00:27:39.934 15:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:39.934 15:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:39.934 15:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:39.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:39.934 15:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:39.934 15:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:39.934 [2024-11-20 15:37:28.113974] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:27:39.934 [2024-11-20 15:37:28.114038] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:39.934 [2024-11-20 15:37:28.215601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:39.934 [2024-11-20 15:37:28.265927] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:39.934 [2024-11-20 15:37:28.265982] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:39.934 [2024-11-20 15:37:28.265995] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:39.934 [2024-11-20 15:37:28.266002] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:39.934 [2024-11-20 15:37:28.266008] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:39.934 [2024-11-20 15:37:28.266779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:40.195 15:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:40.195 15:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:27:40.195 15:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:40.195 15:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:40.195 15:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:40.195 15:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:40.195 15:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:40.195 15:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.195 15:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:40.195 [2024-11-20 15:37:29.004464] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:40.195 [2024-11-20 15:37:29.012774] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:40.195 null0 00:27:40.195 [2024-11-20 15:37:29.044689] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:40.195 15:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.195 15:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=742616 00:27:40.195 15:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 742616 /tmp/host.sock 00:27:40.195 15:37:29 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:40.195 15:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 742616 ']' 00:27:40.195 15:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:27:40.195 15:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:40.195 15:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:40.195 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:40.195 15:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:40.195 15:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:40.195 [2024-11-20 15:37:29.121861] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:27:40.195 [2024-11-20 15:37:29.121929] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid742616 ] 00:27:40.455 [2024-11-20 15:37:29.213403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.455 [2024-11-20 15:37:29.266741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:41.026 15:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:41.026 15:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:27:41.026 15:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:41.026 15:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:41.026 15:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.026 15:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:41.026 15:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.026 15:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:41.026 15:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.026 15:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:41.287 15:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.287 15:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:41.287 15:37:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.287 15:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:42.228 [2024-11-20 15:37:31.105360] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:42.228 [2024-11-20 15:37:31.105398] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:42.228 [2024-11-20 15:37:31.105416] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:42.489 [2024-11-20 15:37:31.192679] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:42.489 [2024-11-20 15:37:31.372932] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:27:42.489 [2024-11-20 15:37:31.374191] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x58c410:1 started. 00:27:42.489 [2024-11-20 15:37:31.376013] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:42.489 [2024-11-20 15:37:31.376082] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:42.489 [2024-11-20 15:37:31.376108] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:42.489 [2024-11-20 15:37:31.376127] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:42.489 [2024-11-20 15:37:31.376152] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:42.489 15:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.489 15:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:42.489 15:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:42.489 15:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:42.489 15:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:42.489 15:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.489 15:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:42.489 15:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:42.489 15:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:42.489 15:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.489 [2024-11-20 15:37:31.424085] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x58c410 was disconnected and freed. delete nvme_qpair. 
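[annotation] The host-side sequence traced above, collapsed into equivalent rpc.py calls — a sketch assuming the host nvmf_tgt started with -r /tmp/host.sock --wait-for-rpc is still running; all flag values are copied verbatim from the traced rpc_cmd invocations:

RPC="scripts/rpc.py -s /tmp/host.sock"   # rpc_cmd in the trace wraps this script
$RPC bdev_nvme_set_options -e 1          # option as traced (app was started --wait-for-rpc)
$RPC framework_start_init
$RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
    --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach
$RPC bdev_get_bdevs | jq -r '.[].name' | sort | xargs   # -> nvme0n1 in this run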
00:27:42.489 15:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:42.489 15:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:42.489 15:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:42.749 15:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:42.749 15:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:42.749 15:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:42.749 15:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:42.749 15:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.749 15:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:42.749 15:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:42.749 15:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:42.749 15:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.749 15:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:42.749 15:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:43.691 15:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:43.691 15:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:43.691 15:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:43.691 15:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.691 15:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:43.691 15:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:43.691 15:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:43.691 15:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.951 15:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:43.951 15:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:44.893 15:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:44.893 15:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:44.893 15:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:44.893 15:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.893 15:37:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:44.893 15:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:44.893 15:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:44.893 15:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.893 15:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:44.893 15:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:45.836 15:37:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:45.836 15:37:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:45.836 15:37:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:45.836 15:37:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.836 15:37:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:45.836 15:37:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:45.836 15:37:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:45.836 15:37:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.836 15:37:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:45.836 15:37:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:47.219 15:37:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:47.219 15:37:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:47.219 15:37:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:47.219 15:37:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.219 15:37:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:47.219 15:37:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:47.219 15:37:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:47.219 15:37:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.219 15:37:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:47.219 15:37:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:48.161 [2024-11-20 15:37:36.816109] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:48.161 [2024-11-20 15:37:36.816144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.161 [2024-11-20 15:37:36.816153] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.161 [2024-11-20 15:37:36.816163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.161 [2024-11-20 15:37:36.816169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.161 [2024-11-20 15:37:36.816175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.161 [2024-11-20 15:37:36.816180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.161 [2024-11-20 15:37:36.816186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.161 [2024-11-20 15:37:36.816191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.161 [2024-11-20 15:37:36.816197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.161 [2024-11-20 15:37:36.816203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.161 [2024-11-20 15:37:36.816208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x568c00 is same with the state(6) to be set 00:27:48.161 [2024-11-20 15:37:36.826130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x568c00 (9): Bad file descriptor 00:27:48.161 15:37:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:48.161 15:37:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:48.161 15:37:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:48.161 15:37:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.161 15:37:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:48.161 15:37:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:48.161 [2024-11-20 15:37:36.836169] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:48.161 [2024-11-20 15:37:36.836179] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:48.161 [2024-11-20 15:37:36.836183] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:48.161 [2024-11-20 15:37:36.836187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:48.161 [2024-11-20 15:37:36.836204] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
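[annotation] The one-second polling visible in the trace (bdev_get_bdevs piped through jq/sort/xargs, then sleep 1) comes from two small helpers in host/discovery_remove_ifc.sh. A hedged reconstruction from the traced pipeline; the real script may differ in detail:

get_bdev_list() {
    # rpc_cmd is the autotest wrapper around rpc.py, as seen in the trace.
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
wait_for_bdev() {
    # Poll once per second until the bdev list equals the expected value.
    while [[ "$(get_bdev_list)" != "$1" ]]; do
        sleep 1
    done
}
# As traced: after 'ip addr del 10.0.0.2/24 dev cvl_0_0' and
# 'ip link set cvl_0_0 down' in the target namespace, wait_for_bdev ''
# spins until the nvme0n1 bdev is torn down.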
00:27:48.161 15:37:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:49.102 [2024-11-20 15:37:37.883235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:49.102 [2024-11-20 15:37:37.883326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x568c00 with addr=10.0.0.2, port=4420 00:27:49.102 [2024-11-20 15:37:37.883357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x568c00 is same with the state(6) to be set 00:27:49.102 [2024-11-20 15:37:37.883413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x568c00 (9): Bad file descriptor 00:27:49.102 [2024-11-20 15:37:37.884537] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:27:49.102 [2024-11-20 15:37:37.884607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:49.102 [2024-11-20 15:37:37.884629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:49.102 [2024-11-20 15:37:37.884652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:49.102 [2024-11-20 15:37:37.884672] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:49.102 [2024-11-20 15:37:37.884688] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:49.102 [2024-11-20 15:37:37.884702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:49.102 [2024-11-20 15:37:37.884725] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:49.102 [2024-11-20 15:37:37.884740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:49.102 15:37:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.102 15:37:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:49.102 15:37:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:50.048 [2024-11-20 15:37:38.887164] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:50.048 [2024-11-20 15:37:38.887180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:50.048 [2024-11-20 15:37:38.887189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:50.048 [2024-11-20 15:37:38.887199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:50.049 [2024-11-20 15:37:38.887204] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:27:50.049 [2024-11-20 15:37:38.887210] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:27:50.049 [2024-11-20 15:37:38.887214] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:50.049 [2024-11-20 15:37:38.887217] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:50.049 [2024-11-20 15:37:38.887235] bdev_nvme.c:7230:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:50.049 [2024-11-20 15:37:38.887253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:50.049 [2024-11-20 15:37:38.887259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.049 [2024-11-20 15:37:38.887267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:50.049 [2024-11-20 15:37:38.887272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.049 [2024-11-20 15:37:38.887278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:50.049 [2024-11-20 15:37:38.887285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.049 [2024-11-20 15:37:38.887291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:50.049 [2024-11-20 15:37:38.887296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.049 [2024-11-20 15:37:38.887302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:50.049 [2024-11-20 15:37:38.887307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.049 [2024-11-20 15:37:38.887312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
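[annotation] The pace of the teardown above is set by the three knobs passed to bdev_nvme_start_discovery earlier in this trace. A hedged reading of their semantics, per the SPDK bdev_nvme documentation (verify against your SPDK version):

RECONNECT_DELAY_SEC=1        # wait this long between reconnect attempts
FAST_IO_FAIL_TIMEOUT_SEC=1   # fail pending I/O back to the caller after 1s down
CTRLR_LOSS_TIMEOUT_SEC=2     # stop retrying and delete the controller after 2s
# With these values the nvme0n1 bdev should vanish roughly two seconds after
# the link drop, matching the two one-second polls the script makes before
# get_bdev_list returns ''.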
00:27:50.049 [2024-11-20 15:37:38.887666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x558340 (9): Bad file descriptor 00:27:50.049 [2024-11-20 15:37:38.888676] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:50.049 [2024-11-20 15:37:38.888684] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:27:50.049 15:37:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:50.049 15:37:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:50.049 15:37:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:50.049 15:37:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.049 15:37:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:50.049 15:37:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:50.049 15:37:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:50.049 15:37:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.049 15:37:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:50.049 15:37:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:50.049 15:37:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:50.344 15:37:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:50.344 15:37:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:50.344 15:37:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:50.344 15:37:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:50.344 15:37:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.344 15:37:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:50.344 15:37:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:50.344 15:37:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:50.344 15:37:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.344 15:37:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:50.344 15:37:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:51.378 15:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:51.378 15:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:51.378 15:37:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:51.378 15:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.378 15:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:51.378 15:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:51.378 15:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:51.378 15:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.378 15:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:51.378 15:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:51.948 [2024-11-20 15:37:40.898952] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:51.948 [2024-11-20 15:37:40.898967] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:51.948 [2024-11-20 15:37:40.898977] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:52.208 [2024-11-20 15:37:40.989242] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:52.208 [2024-11-20 15:37:41.129269] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:27:52.208 [2024-11-20 15:37:41.129969] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x567a60:1 started. 00:27:52.208 [2024-11-20 15:37:41.130874] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:52.208 [2024-11-20 15:37:41.130902] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:52.208 [2024-11-20 15:37:41.130916] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:52.208 [2024-11-20 15:37:41.130927] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:52.208 [2024-11-20 15:37:41.130933] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:52.208 [2024-11-20 15:37:41.136967] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x567a60 was disconnected and freed. delete nvme_qpair. 
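[annotation] The recovery half of the test, as traced just above: restore the target address, bring the link back up, and wait for discovery to re-attach. Discovery creates a second controller instance, hence the new names nvme1 / nvme1n1; commands and names are from this run:

ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
wait_for_bdev nvme1n1   # helper sketched earlier; the rediscovered ctrlr is nvme1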
00:27:52.208 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:52.469 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:52.470 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:52.470 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.470 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:52.470 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:52.470 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:52.470 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.470 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:52.470 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:52.470 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 742616 00:27:52.470 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 742616 ']' 00:27:52.470 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 742616 00:27:52.470 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:27:52.470 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:52.470 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 742616 00:27:52.470 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:52.470 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:52.470 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 742616' 00:27:52.470 killing process with pid 742616 00:27:52.470 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 742616 00:27:52.470 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 742616 00:27:52.470 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:52.470 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:52.470 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:27:52.470 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:52.470 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:27:52.470 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:52.470 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:52.470 rmmod nvme_tcp 00:27:52.470 rmmod nvme_fabrics 00:27:52.470 rmmod nvme_keyring 00:27:52.732 15:37:41 
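[annotation] Teardown (nvmftestfini) as it continues below, sketched as standalone equivalents. The module removals and iptables restore are copied from the trace; the namespace deletion is an assumption about what _remove_spdk_ns does:

modprobe -v -r nvme-tcp       # the rmmod lines above show nvme_tcp/nvme_fabrics/nvme_keyring going
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged rule
ip netns delete cvl_0_0_ns_spdk   # assumed body of _remove_spdk_ns (not shown in trace)
ip -4 addr flush cvl_0_1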
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:52.732 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:27:52.732 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:27:52.732 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 742562 ']' 00:27:52.732 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 742562 00:27:52.732 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 742562 ']' 00:27:52.732 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 742562 00:27:52.732 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:27:52.732 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:52.732 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 742562 00:27:52.732 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:52.732 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:52.732 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 742562' 00:27:52.732 killing process with pid 742562 00:27:52.732 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 742562 00:27:52.732 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 742562 00:27:52.732 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:52.732 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:52.732 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:52.732 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:27:52.732 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:27:52.732 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:52.732 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:27:52.732 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:52.732 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:52.732 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.732 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:52.732 15:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:55.279 00:27:55.279 real 0m23.393s 00:27:55.279 user 0m27.400s 00:27:55.279 sys 0m7.114s 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:55.279 ************************************ 00:27:55.279 END TEST nvmf_discovery_remove_ifc 00:27:55.279 ************************************ 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.279 ************************************ 00:27:55.279 START TEST nvmf_identify_kernel_target 00:27:55.279 ************************************ 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:55.279 * Looking for test storage... 00:27:55.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:55.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.279 --rc genhtml_branch_coverage=1 00:27:55.279 --rc genhtml_function_coverage=1 00:27:55.279 --rc genhtml_legend=1 00:27:55.279 --rc geninfo_all_blocks=1 00:27:55.279 --rc geninfo_unexecuted_blocks=1 00:27:55.279 00:27:55.279 ' 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:55.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.279 --rc genhtml_branch_coverage=1 00:27:55.279 --rc genhtml_function_coverage=1 00:27:55.279 --rc genhtml_legend=1 00:27:55.279 --rc geninfo_all_blocks=1 00:27:55.279 --rc geninfo_unexecuted_blocks=1 00:27:55.279 00:27:55.279 ' 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:55.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.279 --rc genhtml_branch_coverage=1 00:27:55.279 --rc genhtml_function_coverage=1 00:27:55.279 --rc genhtml_legend=1 00:27:55.279 --rc geninfo_all_blocks=1 00:27:55.279 --rc geninfo_unexecuted_blocks=1 00:27:55.279 00:27:55.279 ' 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:55.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.279 --rc genhtml_branch_coverage=1 00:27:55.279 --rc genhtml_function_coverage=1 00:27:55.279 --rc genhtml_legend=1 00:27:55.279 --rc geninfo_all_blocks=1 00:27:55.279 --rc geninfo_unexecuted_blocks=1 00:27:55.279 00:27:55.279 ' 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:55.279 15:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:55.279 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:55.279 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:55.279 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:55.279 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:55.279 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:55.279 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:55.279 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:55.279 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:55.279 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:55.279 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:55.279 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:55.279 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:55.279 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:55.279 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:55.279 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:55.280 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:55.280 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:55.280 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:27:55.280 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:55.280 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:55.280 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:55.280 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.280 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.280 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.280 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:55.280 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.280 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:27:55.280 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:55.280 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:55.280 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:55.280 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:55.280 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:55.280 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:27:55.280 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:55.280 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:55.280 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:55.280 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:55.280 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:55.280 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:55.280 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:55.280 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:55.280 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:55.280 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:55.280 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:55.280 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:55.280 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:55.280 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:55.280 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:55.280 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:27:55.280 15:37:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:28:03.425 15:37:51 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:03.425 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:03.425 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:03.425 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:03.425 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:03.425 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:03.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:03.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.566 ms 00:28:03.426 00:28:03.426 --- 10.0.0.2 ping statistics --- 00:28:03.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.426 rtt min/avg/max/mdev = 0.566/0.566/0.566/0.000 ms 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:03.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:03.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:28:03.426 00:28:03.426 --- 10.0.0.1 ping statistics --- 00:28:03.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.426 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:03.426 15:37:51 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:03.426 15:37:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:05.970 Waiting for block devices as requested 00:28:05.970 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:05.970 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:05.970 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:06.243 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:06.244 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:06.244 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:06.518 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:06.518 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:06.518 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:06.778 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:06.778 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:07.039 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:07.039 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:07.039 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:07.299 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:07.299 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:07.299 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:07.559 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:07.559 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:07.559 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:07.559 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:07.559 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:07.559 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
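The device scan earlier in this trace (nvmf/common.sh@366-429) resolves each supported Intel E810 PCI function to its kernel net device through sysfs before any addressing happens. A minimal standalone sketch of that lookup, assuming the same two-port 0000:4b:00.x NIC bound to its kernel driver so the net/ directory exists; reading operstate is an assumption here, since the trace only shows the resulting [[ up == up ]] test:

  # Map NVMe-oF-capable PCI functions to their net devices via sysfs.
  net_devs=()
  for pci in 0000:4b:00.0 0000:4b:00.1; do
      for dev_path in "/sys/bus/pci/devices/$pci/net/"*; do    # e.g. .../net/cvl_0_0
          dev=${dev_path##*/}
          [[ $(< "$dev_path/operstate") == up ]] || continue   # keep only live ports
          echo "Found net devices under $pci: $dev"
          net_devs+=("$dev")
      done
  done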
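With cvl_0_0 and cvl_0_1 found, nvmf_tcp_init above (nvmf/common.sh@250-291) splits the two ports of the same NIC into target and initiator roles by moving the target port into a private network namespace, so the 10.0.0.0/24 traffic really crosses the wire instead of looping back. Condensed from the trace:

  ip netns add cvl_0_0_ns_spdk                 # target side lives in its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator IP (host namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
  ping -c 1 10.0.0.2                                             # verify both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1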
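The identify_kernel_nvmf.sh steps that follow build a kernel NVMe-oF target around /dev/nvme0n1 through the nvmet configfs tree (configure_kernel_target, nvmf/common.sh@660-708). xtrace hides the redirection targets, so the attribute file names below are the standard nvmet configfs ones rather than literal quotes from this run:

  modprobe nvmet                               # exposes /sys/kernel/config/nvmet
  cfg=/sys/kernel/config/nvmet
  subsys=$cfg/subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir "$subsys" "$subsys/namespaces/1" "$cfg/ports/1"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
  echo 1            > "$subsys/attr_allow_any_host"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$cfg/ports/1/addr_traddr"   # must match the netns target IP
  echo tcp          > "$cfg/ports/1/addr_trtype"
  echo 4420         > "$cfg/ports/1/addr_trsvcid"
  echo ipv4         > "$cfg/ports/1/addr_adrfam"
  ln -s "$subsys" "$cfg/ports/1/subsystems/"       # listener goes live on this link

Once that symlink lands, the nvme discover call in the trace can see both the discovery subsystem and nqn.2016-06.io.spdk:testnqn on 10.0.0.1:4420.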
00:28:07.559 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:07.559 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:07.559 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:07.820 No valid GPT data, bailing 00:28:07.820 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:07.820 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:28:07.820 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:28:07.820 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:07.820 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:28:07.820 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:07.820 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:07.820 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:07.820 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:07.820 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:28:07.820 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:28:07.820 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:28:07.820 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:28:07.820 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:28:07.820 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:28:07.820 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:28:07.820 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:07.820 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:28:07.820 00:28:07.820 Discovery Log Number of Records 2, Generation counter 2 00:28:07.820 =====Discovery Log Entry 0====== 00:28:07.820 trtype: tcp 00:28:07.820 adrfam: ipv4 00:28:07.820 subtype: current discovery subsystem 00:28:07.820 treq: not specified, sq flow control disable supported 00:28:07.820 portid: 1 00:28:07.820 trsvcid: 4420 00:28:07.820 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:07.820 traddr: 10.0.0.1 00:28:07.820 eflags: none 00:28:07.820 sectype: none 00:28:07.820 =====Discovery Log Entry 1====== 00:28:07.820 trtype: tcp 00:28:07.820 adrfam: ipv4 00:28:07.820 subtype: nvme subsystem 00:28:07.820 treq: not specified, sq flow control disable 
supported 00:28:07.820 portid: 1 00:28:07.820 trsvcid: 4420 00:28:07.820 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:07.820 traddr: 10.0.0.1 00:28:07.820 eflags: none 00:28:07.820 sectype: none 00:28:07.820 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:28:07.820 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:28:07.820 ===================================================== 00:28:07.820 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:07.820 ===================================================== 00:28:07.820 Controller Capabilities/Features 00:28:07.820 ================================ 00:28:07.820 Vendor ID: 0000 00:28:07.820 Subsystem Vendor ID: 0000 00:28:07.820 Serial Number: 3cfdde8cf31a549ba1e4 00:28:07.820 Model Number: Linux 00:28:07.820 Firmware Version: 6.8.9-20 00:28:07.820 Recommended Arb Burst: 0 00:28:07.820 IEEE OUI Identifier: 00 00 00 00:28:07.820 Multi-path I/O 00:28:07.820 May have multiple subsystem ports: No 00:28:07.820 May have multiple controllers: No 00:28:07.820 Associated with SR-IOV VF: No 00:28:07.820 Max Data Transfer Size: Unlimited 00:28:07.820 Max Number of Namespaces: 0 00:28:07.820 Max Number of I/O Queues: 1024 00:28:07.820 NVMe Specification Version (VS): 1.3 00:28:07.820 NVMe Specification Version (Identify): 1.3 00:28:07.820 Maximum Queue Entries: 1024 00:28:07.820 Contiguous Queues Required: No 00:28:07.820 Arbitration Mechanisms Supported 00:28:07.820 Weighted Round Robin: Not Supported 00:28:07.820 Vendor Specific: Not Supported 00:28:07.820 Reset Timeout: 7500 ms 00:28:07.820 Doorbell Stride: 4 bytes 00:28:07.820 NVM Subsystem Reset: Not Supported 00:28:07.820 Command Sets Supported 00:28:07.820 NVM Command Set: Supported 00:28:07.820 Boot Partition: Not Supported 00:28:07.820 Memory Page Size Minimum: 4096 bytes 00:28:07.820 Memory Page Size Maximum: 4096 bytes 00:28:07.820 Persistent Memory Region: Not Supported 00:28:07.820 Optional Asynchronous Events Supported 00:28:07.820 Namespace Attribute Notices: Not Supported 00:28:07.820 Firmware Activation Notices: Not Supported 00:28:07.820 ANA Change Notices: Not Supported 00:28:07.820 PLE Aggregate Log Change Notices: Not Supported 00:28:07.820 LBA Status Info Alert Notices: Not Supported 00:28:07.820 EGE Aggregate Log Change Notices: Not Supported 00:28:07.820 Normal NVM Subsystem Shutdown event: Not Supported 00:28:07.820 Zone Descriptor Change Notices: Not Supported 00:28:07.820 Discovery Log Change Notices: Supported 00:28:07.820 Controller Attributes 00:28:07.820 128-bit Host Identifier: Not Supported 00:28:07.820 Non-Operational Permissive Mode: Not Supported 00:28:07.820 NVM Sets: Not Supported 00:28:07.820 Read Recovery Levels: Not Supported 00:28:07.820 Endurance Groups: Not Supported 00:28:07.820 Predictable Latency Mode: Not Supported 00:28:07.820 Traffic Based Keep ALive: Not Supported 00:28:07.820 Namespace Granularity: Not Supported 00:28:07.820 SQ Associations: Not Supported 00:28:07.820 UUID List: Not Supported 00:28:07.820 Multi-Domain Subsystem: Not Supported 00:28:07.820 Fixed Capacity Management: Not Supported 00:28:07.820 Variable Capacity Management: Not Supported 00:28:07.820 Delete Endurance Group: Not Supported 00:28:07.820 Delete NVM Set: Not Supported 00:28:07.820 Extended LBA Formats Supported: Not Supported 00:28:07.820 Flexible Data Placement 
Supported: Not Supported 00:28:07.820 00:28:07.820 Controller Memory Buffer Support 00:28:07.820 ================================ 00:28:07.820 Supported: No 00:28:07.820 00:28:07.820 Persistent Memory Region Support 00:28:07.820 ================================ 00:28:07.820 Supported: No 00:28:07.820 00:28:07.820 Admin Command Set Attributes 00:28:07.820 ============================ 00:28:07.820 Security Send/Receive: Not Supported 00:28:07.820 Format NVM: Not Supported 00:28:07.820 Firmware Activate/Download: Not Supported 00:28:07.820 Namespace Management: Not Supported 00:28:07.820 Device Self-Test: Not Supported 00:28:07.820 Directives: Not Supported 00:28:07.820 NVMe-MI: Not Supported 00:28:07.820 Virtualization Management: Not Supported 00:28:07.820 Doorbell Buffer Config: Not Supported 00:28:07.820 Get LBA Status Capability: Not Supported 00:28:07.820 Command & Feature Lockdown Capability: Not Supported 00:28:07.820 Abort Command Limit: 1 00:28:07.820 Async Event Request Limit: 1 00:28:07.820 Number of Firmware Slots: N/A 00:28:07.820 Firmware Slot 1 Read-Only: N/A 00:28:08.082 Firmware Activation Without Reset: N/A 00:28:08.082 Multiple Update Detection Support: N/A 00:28:08.082 Firmware Update Granularity: No Information Provided 00:28:08.082 Per-Namespace SMART Log: No 00:28:08.082 Asymmetric Namespace Access Log Page: Not Supported 00:28:08.082 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:08.082 Command Effects Log Page: Not Supported 00:28:08.082 Get Log Page Extended Data: Supported 00:28:08.082 Telemetry Log Pages: Not Supported 00:28:08.082 Persistent Event Log Pages: Not Supported 00:28:08.082 Supported Log Pages Log Page: May Support 00:28:08.082 Commands Supported & Effects Log Page: Not Supported 00:28:08.082 Feature Identifiers & Effects Log Page:May Support 00:28:08.082 NVMe-MI Commands & Effects Log Page: May Support 00:28:08.082 Data Area 4 for Telemetry Log: Not Supported 00:28:08.082 Error Log Page Entries Supported: 1 00:28:08.082 Keep Alive: Not Supported 00:28:08.082 00:28:08.082 NVM Command Set Attributes 00:28:08.082 ========================== 00:28:08.082 Submission Queue Entry Size 00:28:08.082 Max: 1 00:28:08.082 Min: 1 00:28:08.082 Completion Queue Entry Size 00:28:08.082 Max: 1 00:28:08.082 Min: 1 00:28:08.082 Number of Namespaces: 0 00:28:08.082 Compare Command: Not Supported 00:28:08.082 Write Uncorrectable Command: Not Supported 00:28:08.082 Dataset Management Command: Not Supported 00:28:08.082 Write Zeroes Command: Not Supported 00:28:08.082 Set Features Save Field: Not Supported 00:28:08.082 Reservations: Not Supported 00:28:08.082 Timestamp: Not Supported 00:28:08.082 Copy: Not Supported 00:28:08.082 Volatile Write Cache: Not Present 00:28:08.082 Atomic Write Unit (Normal): 1 00:28:08.082 Atomic Write Unit (PFail): 1 00:28:08.082 Atomic Compare & Write Unit: 1 00:28:08.082 Fused Compare & Write: Not Supported 00:28:08.082 Scatter-Gather List 00:28:08.082 SGL Command Set: Supported 00:28:08.082 SGL Keyed: Not Supported 00:28:08.082 SGL Bit Bucket Descriptor: Not Supported 00:28:08.082 SGL Metadata Pointer: Not Supported 00:28:08.082 Oversized SGL: Not Supported 00:28:08.082 SGL Metadata Address: Not Supported 00:28:08.082 SGL Offset: Supported 00:28:08.082 Transport SGL Data Block: Not Supported 00:28:08.082 Replay Protected Memory Block: Not Supported 00:28:08.082 00:28:08.082 Firmware Slot Information 00:28:08.082 ========================= 00:28:08.082 Active slot: 0 00:28:08.082 00:28:08.082 00:28:08.082 Error Log 00:28:08.082 
========= 00:28:08.082 00:28:08.082 Active Namespaces 00:28:08.082 ================= 00:28:08.082 Discovery Log Page 00:28:08.082 ================== 00:28:08.082 Generation Counter: 2 00:28:08.082 Number of Records: 2 00:28:08.082 Record Format: 0 00:28:08.082 00:28:08.082 Discovery Log Entry 0 00:28:08.082 ---------------------- 00:28:08.082 Transport Type: 3 (TCP) 00:28:08.082 Address Family: 1 (IPv4) 00:28:08.082 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:08.082 Entry Flags: 00:28:08.082 Duplicate Returned Information: 0 00:28:08.082 Explicit Persistent Connection Support for Discovery: 0 00:28:08.082 Transport Requirements: 00:28:08.082 Secure Channel: Not Specified 00:28:08.082 Port ID: 1 (0x0001) 00:28:08.082 Controller ID: 65535 (0xffff) 00:28:08.082 Admin Max SQ Size: 32 00:28:08.082 Transport Service Identifier: 4420 00:28:08.082 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:08.082 Transport Address: 10.0.0.1 00:28:08.082 Discovery Log Entry 1 00:28:08.082 ---------------------- 00:28:08.082 Transport Type: 3 (TCP) 00:28:08.082 Address Family: 1 (IPv4) 00:28:08.082 Subsystem Type: 2 (NVM Subsystem) 00:28:08.082 Entry Flags: 00:28:08.082 Duplicate Returned Information: 0 00:28:08.082 Explicit Persistent Connection Support for Discovery: 0 00:28:08.082 Transport Requirements: 00:28:08.082 Secure Channel: Not Specified 00:28:08.082 Port ID: 1 (0x0001) 00:28:08.082 Controller ID: 65535 (0xffff) 00:28:08.082 Admin Max SQ Size: 32 00:28:08.082 Transport Service Identifier: 4420 00:28:08.082 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:28:08.082 Transport Address: 10.0.0.1 00:28:08.082 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:08.082 get_feature(0x01) failed 00:28:08.083 get_feature(0x02) failed 00:28:08.083 get_feature(0x04) failed 00:28:08.083 ===================================================== 00:28:08.083 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:08.083 ===================================================== 00:28:08.083 Controller Capabilities/Features 00:28:08.083 ================================ 00:28:08.083 Vendor ID: 0000 00:28:08.083 Subsystem Vendor ID: 0000 00:28:08.083 Serial Number: 7f11cc29bfc92bb9dbf2 00:28:08.083 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:28:08.083 Firmware Version: 6.8.9-20 00:28:08.083 Recommended Arb Burst: 6 00:28:08.083 IEEE OUI Identifier: 00 00 00 00:28:08.083 Multi-path I/O 00:28:08.083 May have multiple subsystem ports: Yes 00:28:08.083 May have multiple controllers: Yes 00:28:08.083 Associated with SR-IOV VF: No 00:28:08.083 Max Data Transfer Size: Unlimited 00:28:08.083 Max Number of Namespaces: 1024 00:28:08.083 Max Number of I/O Queues: 128 00:28:08.083 NVMe Specification Version (VS): 1.3 00:28:08.083 NVMe Specification Version (Identify): 1.3 00:28:08.083 Maximum Queue Entries: 1024 00:28:08.083 Contiguous Queues Required: No 00:28:08.083 Arbitration Mechanisms Supported 00:28:08.083 Weighted Round Robin: Not Supported 00:28:08.083 Vendor Specific: Not Supported 00:28:08.083 Reset Timeout: 7500 ms 00:28:08.083 Doorbell Stride: 4 bytes 00:28:08.083 NVM Subsystem Reset: Not Supported 00:28:08.083 Command Sets Supported 00:28:08.083 NVM Command Set: Supported 00:28:08.083 Boot Partition: Not Supported 00:28:08.083 
Memory Page Size Minimum: 4096 bytes 00:28:08.083 Memory Page Size Maximum: 4096 bytes 00:28:08.083 Persistent Memory Region: Not Supported 00:28:08.083 Optional Asynchronous Events Supported 00:28:08.083 Namespace Attribute Notices: Supported 00:28:08.083 Firmware Activation Notices: Not Supported 00:28:08.083 ANA Change Notices: Supported 00:28:08.083 PLE Aggregate Log Change Notices: Not Supported 00:28:08.083 LBA Status Info Alert Notices: Not Supported 00:28:08.083 EGE Aggregate Log Change Notices: Not Supported 00:28:08.083 Normal NVM Subsystem Shutdown event: Not Supported 00:28:08.083 Zone Descriptor Change Notices: Not Supported 00:28:08.083 Discovery Log Change Notices: Not Supported 00:28:08.083 Controller Attributes 00:28:08.083 128-bit Host Identifier: Supported 00:28:08.083 Non-Operational Permissive Mode: Not Supported 00:28:08.083 NVM Sets: Not Supported 00:28:08.083 Read Recovery Levels: Not Supported 00:28:08.083 Endurance Groups: Not Supported 00:28:08.083 Predictable Latency Mode: Not Supported 00:28:08.083 Traffic Based Keep ALive: Supported 00:28:08.083 Namespace Granularity: Not Supported 00:28:08.083 SQ Associations: Not Supported 00:28:08.083 UUID List: Not Supported 00:28:08.083 Multi-Domain Subsystem: Not Supported 00:28:08.083 Fixed Capacity Management: Not Supported 00:28:08.083 Variable Capacity Management: Not Supported 00:28:08.083 Delete Endurance Group: Not Supported 00:28:08.083 Delete NVM Set: Not Supported 00:28:08.083 Extended LBA Formats Supported: Not Supported 00:28:08.083 Flexible Data Placement Supported: Not Supported 00:28:08.083 00:28:08.083 Controller Memory Buffer Support 00:28:08.083 ================================ 00:28:08.083 Supported: No 00:28:08.083 00:28:08.083 Persistent Memory Region Support 00:28:08.083 ================================ 00:28:08.083 Supported: No 00:28:08.083 00:28:08.083 Admin Command Set Attributes 00:28:08.083 ============================ 00:28:08.083 Security Send/Receive: Not Supported 00:28:08.083 Format NVM: Not Supported 00:28:08.083 Firmware Activate/Download: Not Supported 00:28:08.083 Namespace Management: Not Supported 00:28:08.083 Device Self-Test: Not Supported 00:28:08.083 Directives: Not Supported 00:28:08.083 NVMe-MI: Not Supported 00:28:08.083 Virtualization Management: Not Supported 00:28:08.083 Doorbell Buffer Config: Not Supported 00:28:08.083 Get LBA Status Capability: Not Supported 00:28:08.083 Command & Feature Lockdown Capability: Not Supported 00:28:08.083 Abort Command Limit: 4 00:28:08.083 Async Event Request Limit: 4 00:28:08.083 Number of Firmware Slots: N/A 00:28:08.083 Firmware Slot 1 Read-Only: N/A 00:28:08.083 Firmware Activation Without Reset: N/A 00:28:08.083 Multiple Update Detection Support: N/A 00:28:08.083 Firmware Update Granularity: No Information Provided 00:28:08.083 Per-Namespace SMART Log: Yes 00:28:08.083 Asymmetric Namespace Access Log Page: Supported 00:28:08.083 ANA Transition Time : 10 sec 00:28:08.083 00:28:08.083 Asymmetric Namespace Access Capabilities 00:28:08.083 ANA Optimized State : Supported 00:28:08.083 ANA Non-Optimized State : Supported 00:28:08.083 ANA Inaccessible State : Supported 00:28:08.083 ANA Persistent Loss State : Supported 00:28:08.083 ANA Change State : Supported 00:28:08.083 ANAGRPID is not changed : No 00:28:08.083 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:28:08.083 00:28:08.083 ANA Group Identifier Maximum : 128 00:28:08.083 Number of ANA Group Identifiers : 128 00:28:08.083 Max Number of Allowed Namespaces : 1024 00:28:08.083 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:28:08.083 Command Effects Log Page: Supported 00:28:08.083 Get Log Page Extended Data: Supported 00:28:08.083 Telemetry Log Pages: Not Supported 00:28:08.083 Persistent Event Log Pages: Not Supported 00:28:08.083 Supported Log Pages Log Page: May Support 00:28:08.083 Commands Supported & Effects Log Page: Not Supported 00:28:08.083 Feature Identifiers & Effects Log Page:May Support 00:28:08.083 NVMe-MI Commands & Effects Log Page: May Support 00:28:08.083 Data Area 4 for Telemetry Log: Not Supported 00:28:08.083 Error Log Page Entries Supported: 128 00:28:08.083 Keep Alive: Supported 00:28:08.083 Keep Alive Granularity: 1000 ms 00:28:08.083 00:28:08.083 NVM Command Set Attributes 00:28:08.083 ========================== 00:28:08.083 Submission Queue Entry Size 00:28:08.083 Max: 64 00:28:08.083 Min: 64 00:28:08.083 Completion Queue Entry Size 00:28:08.083 Max: 16 00:28:08.083 Min: 16 00:28:08.083 Number of Namespaces: 1024 00:28:08.083 Compare Command: Not Supported 00:28:08.083 Write Uncorrectable Command: Not Supported 00:28:08.083 Dataset Management Command: Supported 00:28:08.083 Write Zeroes Command: Supported 00:28:08.083 Set Features Save Field: Not Supported 00:28:08.083 Reservations: Not Supported 00:28:08.083 Timestamp: Not Supported 00:28:08.083 Copy: Not Supported 00:28:08.083 Volatile Write Cache: Present 00:28:08.083 Atomic Write Unit (Normal): 1 00:28:08.083 Atomic Write Unit (PFail): 1 00:28:08.083 Atomic Compare & Write Unit: 1 00:28:08.083 Fused Compare & Write: Not Supported 00:28:08.083 Scatter-Gather List 00:28:08.083 SGL Command Set: Supported 00:28:08.083 SGL Keyed: Not Supported 00:28:08.083 SGL Bit Bucket Descriptor: Not Supported 00:28:08.083 SGL Metadata Pointer: Not Supported 00:28:08.083 Oversized SGL: Not Supported 00:28:08.083 SGL Metadata Address: Not Supported 00:28:08.083 SGL Offset: Supported 00:28:08.083 Transport SGL Data Block: Not Supported 00:28:08.083 Replay Protected Memory Block: Not Supported 00:28:08.083 00:28:08.083 Firmware Slot Information 00:28:08.083 ========================= 00:28:08.083 Active slot: 0 00:28:08.083 00:28:08.083 Asymmetric Namespace Access 00:28:08.083 =========================== 00:28:08.083 Change Count : 0 00:28:08.083 Number of ANA Group Descriptors : 1 00:28:08.083 ANA Group Descriptor : 0 00:28:08.083 ANA Group ID : 1 00:28:08.083 Number of NSID Values : 1 00:28:08.083 Change Count : 0 00:28:08.083 ANA State : 1 00:28:08.083 Namespace Identifier : 1 00:28:08.083 00:28:08.083 Commands Supported and Effects 00:28:08.083 ============================== 00:28:08.083 Admin Commands 00:28:08.083 -------------- 00:28:08.083 Get Log Page (02h): Supported 00:28:08.083 Identify (06h): Supported 00:28:08.083 Abort (08h): Supported 00:28:08.083 Set Features (09h): Supported 00:28:08.083 Get Features (0Ah): Supported 00:28:08.083 Asynchronous Event Request (0Ch): Supported 00:28:08.083 Keep Alive (18h): Supported 00:28:08.083 I/O Commands 00:28:08.083 ------------ 00:28:08.083 Flush (00h): Supported 00:28:08.083 Write (01h): Supported LBA-Change 00:28:08.083 Read (02h): Supported 00:28:08.083 Write Zeroes (08h): Supported LBA-Change 00:28:08.083 Dataset Management (09h): Supported 00:28:08.083 00:28:08.083 Error Log 00:28:08.083 ========= 00:28:08.083 Entry: 0 00:28:08.083 Error Count: 0x3 00:28:08.083 Submission Queue Id: 0x0 00:28:08.083 Command Id: 0x5 00:28:08.083 Phase Bit: 0 00:28:08.084 Status Code: 0x2 00:28:08.084 Status Code Type: 0x0 00:28:08.084 Do Not Retry: 1 00:28:08.084 
Error Location: 0x28 00:28:08.084 LBA: 0x0 00:28:08.084 Namespace: 0x0 00:28:08.084 Vendor Log Page: 0x0 00:28:08.084 ----------- 00:28:08.084 Entry: 1 00:28:08.084 Error Count: 0x2 00:28:08.084 Submission Queue Id: 0x0 00:28:08.084 Command Id: 0x5 00:28:08.084 Phase Bit: 0 00:28:08.084 Status Code: 0x2 00:28:08.084 Status Code Type: 0x0 00:28:08.084 Do Not Retry: 1 00:28:08.084 Error Location: 0x28 00:28:08.084 LBA: 0x0 00:28:08.084 Namespace: 0x0 00:28:08.084 Vendor Log Page: 0x0 00:28:08.084 ----------- 00:28:08.084 Entry: 2 00:28:08.084 Error Count: 0x1 00:28:08.084 Submission Queue Id: 0x0 00:28:08.084 Command Id: 0x4 00:28:08.084 Phase Bit: 0 00:28:08.084 Status Code: 0x2 00:28:08.084 Status Code Type: 0x0 00:28:08.084 Do Not Retry: 1 00:28:08.084 Error Location: 0x28 00:28:08.084 LBA: 0x0 00:28:08.084 Namespace: 0x0 00:28:08.084 Vendor Log Page: 0x0 00:28:08.084 00:28:08.084 Number of Queues 00:28:08.084 ================ 00:28:08.084 Number of I/O Submission Queues: 128 00:28:08.084 Number of I/O Completion Queues: 128 00:28:08.084 00:28:08.084 ZNS Specific Controller Data 00:28:08.084 ============================ 00:28:08.084 Zone Append Size Limit: 0 00:28:08.084 00:28:08.084 00:28:08.084 Active Namespaces 00:28:08.084 ================= 00:28:08.084 get_feature(0x05) failed 00:28:08.084 Namespace ID:1 00:28:08.084 Command Set Identifier: NVM (00h) 00:28:08.084 Deallocate: Supported 00:28:08.084 Deallocated/Unwritten Error: Not Supported 00:28:08.084 Deallocated Read Value: Unknown 00:28:08.084 Deallocate in Write Zeroes: Not Supported 00:28:08.084 Deallocated Guard Field: 0xFFFF 00:28:08.084 Flush: Supported 00:28:08.084 Reservation: Not Supported 00:28:08.084 Namespace Sharing Capabilities: Multiple Controllers 00:28:08.084 Size (in LBAs): 3750748848 (1788GiB) 00:28:08.084 Capacity (in LBAs): 3750748848 (1788GiB) 00:28:08.084 Utilization (in LBAs): 3750748848 (1788GiB) 00:28:08.084 UUID: 9d0a0082-54c9-45cd-b186-9bb8d294d644 00:28:08.084 Thin Provisioning: Not Supported 00:28:08.084 Per-NS Atomic Units: Yes 00:28:08.084 Atomic Write Unit (Normal): 8 00:28:08.084 Atomic Write Unit (PFail): 8 00:28:08.084 Preferred Write Granularity: 8 00:28:08.084 Atomic Compare & Write Unit: 8 00:28:08.084 Atomic Boundary Size (Normal): 0 00:28:08.084 Atomic Boundary Size (PFail): 0 00:28:08.084 Atomic Boundary Offset: 0 00:28:08.084 NGUID/EUI64 Never Reused: No 00:28:08.084 ANA group ID: 1 00:28:08.084 Namespace Write Protected: No 00:28:08.084 Number of LBA Formats: 1 00:28:08.084 Current LBA Format: LBA Format #00 00:28:08.084 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:08.084 00:28:08.084 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:28:08.084 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:08.084 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:28:08.084 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:08.084 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:28:08.084 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:08.084 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:08.084 rmmod nvme_tcp 00:28:08.084 rmmod nvme_fabrics 00:28:08.084 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:08.084 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:28:08.084 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:28:08.084 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:28:08.084 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:08.084 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:08.084 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:08.084 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:28:08.084 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:28:08.084 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:08.084 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:28:08.084 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:08.084 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:08.084 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:08.084 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:08.084 15:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:10.629 15:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:10.629 15:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:28:10.629 15:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:10.629 15:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:28:10.629 15:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:10.629 15:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:10.629 15:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:10.629 15:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:10.629 15:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:10.629 15:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:10.629 15:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:13.928 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:13.928 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:13.928 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:28:13.928 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:13.928 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:13.928 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:13.928 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:13.928 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:13.928 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:13.928 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:13.928 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:13.928 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:13.928 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:13.928 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:13.928 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:13.928 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:13.928 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:14.189 00:28:14.189 real 0m19.275s 00:28:14.189 user 0m5.172s 00:28:14.189 sys 0m11.117s 00:28:14.189 15:38:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:14.189 15:38:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:14.189 ************************************ 00:28:14.189 END TEST nvmf_identify_kernel_target 00:28:14.189 ************************************ 00:28:14.189 15:38:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:14.189 15:38:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:14.189 15:38:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:14.189 15:38:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.452 ************************************ 00:28:14.452 START TEST nvmf_auth_host 00:28:14.452 ************************************ 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:14.452 * Looking for test storage... 
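The EXIT trap fired above (clean_kernel_target, nvmf/common.sh@712-723) unwinds that configfs tree strictly in reverse: unlink the port before removing the subsystem, and empty each directory before rmdir. With the same hidden-redirection caveat as before, the teardown amounts to:

  echo 0 > "$subsys/namespaces/1/enable"       # quiesce the namespace first
  rm -f  "$cfg/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
  rmdir  "$subsys/namespaces/1" "$cfg/ports/1" "$subsys"
  modprobe -r nvmet_tcp nvmet                  # transport out before the core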
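The nvmf_auth_host storage probe below re-runs the lcov gate seen at the top of the previous test: scripts/common.sh splits both version strings on dots, dashes, and colons and compares them field by field, so "lt 1.15 2" asks whether lcov 1.15 predates 2.x and, if so, the old branch-coverage flags get enabled. A standalone sketch of that comparison, assuming purely numeric fields:

  # True (exit 0) when version $1 sorts strictly before $2.
  version_lt() {
      local -a v1 v2
      IFS=.-: read -ra v1 <<< "$1"
      IFS=.-: read -ra v2 <<< "$2"
      local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # missing fields count as 0
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
      done
      return 1   # equal versions are not less-than
  }
  version_lt 1.15 2 && echo "old lcov: enable --rc lcov_branch_coverage=1"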
00:28:14.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:14.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.452 --rc genhtml_branch_coverage=1 00:28:14.452 --rc genhtml_function_coverage=1 00:28:14.452 --rc genhtml_legend=1 00:28:14.452 --rc geninfo_all_blocks=1 00:28:14.452 --rc geninfo_unexecuted_blocks=1 00:28:14.452 00:28:14.452 ' 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:14.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.452 --rc genhtml_branch_coverage=1 00:28:14.452 --rc genhtml_function_coverage=1 00:28:14.452 --rc genhtml_legend=1 00:28:14.452 --rc geninfo_all_blocks=1 00:28:14.452 --rc geninfo_unexecuted_blocks=1 00:28:14.452 00:28:14.452 ' 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:14.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.452 --rc genhtml_branch_coverage=1 00:28:14.452 --rc genhtml_function_coverage=1 00:28:14.452 --rc genhtml_legend=1 00:28:14.452 --rc geninfo_all_blocks=1 00:28:14.452 --rc geninfo_unexecuted_blocks=1 00:28:14.452 00:28:14.452 ' 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:14.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.452 --rc genhtml_branch_coverage=1 00:28:14.452 --rc genhtml_function_coverage=1 00:28:14.452 --rc genhtml_legend=1 00:28:14.452 --rc geninfo_all_blocks=1 00:28:14.452 --rc geninfo_unexecuted_blocks=1 00:28:14.452 00:28:14.452 ' 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:14.452 15:38:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:14.452 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:14.453 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.453 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.453 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.453 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:28:14.453 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.453 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:28:14.453 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:14.453 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:14.453 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:14.453 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:14.453 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:14.453 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:14.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:14.453 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:14.453 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:14.453 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:14.453 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:28:14.453 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:28:14.453 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:28:14.453 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:28:14.453 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:14.453 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:14.453 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:28:14.453 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:28:14.453 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:28:14.453 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:14.453 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:14.453 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:14.453 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:14.453 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:14.453 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.453 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:14.453 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:14.453 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:14.453 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:14.453 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:28:14.453 15:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:28:22.590 15:38:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:22.590 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:22.590 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.590 
15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:22.590 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:22.590 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:22.590 15:38:10 
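
The device scan above resolves each supported PCI function to its Linux netdev through sysfs: pci_net_devs globs /sys/bus/pci/devices/$pci/net/*, keeps only interfaces that are up, and emits the "Found net devices under ..." lines seen in the log. A condensed sketch, with the two E810 bus addresses taken from the trace (reading operstate is an assumption about how "up" is determined):

# sketch of the per-PCI netdev lookup traced above
for pci in 0000:4b:00.0 0000:4b:00.1; do
  for path in /sys/bus/pci/devices/$pci/net/*; do
    dev=${path##*/}                             # e.g. cvl_0_0, cvl_0_1
    [[ $(cat "$path/operstate") == up ]] || continue
    echo "Found net devices under $pci: $dev"
  done
done
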
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:22.590 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:22.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:28:22.590 00:28:22.590 --- 10.0.0.2 ping statistics --- 00:28:22.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.590 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:22.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:22.590 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:28:22.590 00:28:22.590 --- 10.0.0.1 ping statistics --- 00:28:22.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.590 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=756952 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 756952 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 756952 ']' 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
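
The nvmf_tcp_init steps above turn the two E810 ports into a point-to-point test topology: cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and addressed as the target at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables ACCEPT rule tagged SPDK_NVMF (so the iptr teardown can grep it back out) opens port 4420, and both directions are verified with one ping each way. A consolidated sketch of those commands, assuming the same interface and namespace names:

# condensed sketch of the nvmf_tcp_init topology above (run as root)
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment SPDK_NVMF              # tag lets the teardown strip the rule
ping -c 1 10.0.0.2                            # initiator -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
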
00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:22.590 15:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.590 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:22.590 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:28:22.590 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:22.590 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:22.590 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.850 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:22.850 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:28:22.850 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:28:22.850 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:22.850 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:22.850 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:22.850 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:22.850 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:22.850 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:22.850 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1589fe301b6bff848a294c40e30308bb 00:28:22.850 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:22.850 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.y2k 00:28:22.850 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1589fe301b6bff848a294c40e30308bb 0 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1589fe301b6bff848a294c40e30308bb 0 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1589fe301b6bff848a294c40e30308bb 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.y2k 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.y2k 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.y2k 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:22.851 15:38:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=612bca89702205a9a124fa6c49d59588a87bf3422d82c3758978613a9297c35d 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Oat 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 612bca89702205a9a124fa6c49d59588a87bf3422d82c3758978613a9297c35d 3 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 612bca89702205a9a124fa6c49d59588a87bf3422d82c3758978613a9297c35d 3 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=612bca89702205a9a124fa6c49d59588a87bf3422d82c3758978613a9297c35d 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Oat 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Oat 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Oat 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e31e97e1d298ad4c25943cc9c63a924dde17c2ef7597a09a 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.U3X 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e31e97e1d298ad4c25943cc9c63a924dde17c2ef7597a09a 0 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e31e97e1d298ad4c25943cc9c63a924dde17c2ef7597a09a 0 
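
Each gen_dhchap_key call in this stretch draws len/2 random bytes as a hex string with xxd, then an inline python snippet wraps it into the DH-HMAC-CHAP secret representation DHHC-1:0<digest>:<base64(key || CRC-32)>:, with the digest index taken from the digests map traced above (0/1/2/3 for null/sha256/sha384/sha512); base64-decoding the DHHC-1:00:ZTMxZTk3... secret used later in this log gives back exactly the e31e97e1... hex string generated here plus four trailing checksum bytes. The python body itself is hidden by xtrace, so the following is a hypothetical reconstruction (it takes the digest index directly, whereas the traced helper takes the digest name and maps it through the array), and the exact checksum convention is an assumption:

# hypothetical reconstruction of gen_dhchap_key (python body not visible in the trace)
gen_dhchap_key() {   # usage: gen_dhchap_key <digest-index 0..3> <key-length-in-hex-chars>
  local key
  key=$(xxd -p -c0 -l $(($2 / 2)) /dev/urandom)   # random hex string, as traced
  python3 -c 'import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")  # checksum variant/endianness is an assumption
print("DHHC-1:0%s:%s:" % (sys.argv[2], base64.b64encode(key + crc).decode()))' "$key" "$1"
}
gen_dhchap_key 0 48   # digest 0 = null, 48 hex chars, matching the trace above
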
00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e31e97e1d298ad4c25943cc9c63a924dde17c2ef7597a09a 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.U3X 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.U3X 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.U3X 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=164433bb416eddea10b8a0c804bc2795d48098aa67dccd6b 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.XWn 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 164433bb416eddea10b8a0c804bc2795d48098aa67dccd6b 2 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 164433bb416eddea10b8a0c804bc2795d48098aa67dccd6b 2 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=164433bb416eddea10b8a0c804bc2795d48098aa67dccd6b 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:22.851 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:23.112 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.XWn 00:28:23.112 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.XWn 00:28:23.112 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.XWn 00:28:23.112 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:23.112 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:23.112 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:23.112 15:38:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:23.112 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:23.112 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:23.112 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:23.112 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d71b08c3c7752b43c4d0f4bc55ea93f0 00:28:23.112 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:28:23.112 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.9N5 00:28:23.112 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d71b08c3c7752b43c4d0f4bc55ea93f0 1 00:28:23.112 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d71b08c3c7752b43c4d0f4bc55ea93f0 1 00:28:23.112 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:23.112 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:23.112 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d71b08c3c7752b43c4d0f4bc55ea93f0 00:28:23.112 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:23.112 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:23.112 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.9N5 00:28:23.112 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.9N5 00:28:23.112 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.9N5 00:28:23.112 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:23.112 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:23.112 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:23.112 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:23.112 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:23.112 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:23.112 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:23.112 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=de6011d4ffa30b0a4de13116b5e910e0 00:28:23.112 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:28:23.112 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ko7 00:28:23.112 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key de6011d4ffa30b0a4de13116b5e910e0 1 00:28:23.112 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 de6011d4ffa30b0a4de13116b5e910e0 1 00:28:23.112 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:23.112 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:23.113 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=de6011d4ffa30b0a4de13116b5e910e0 00:28:23.113 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:23.113 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:23.113 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ko7 00:28:23.113 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ko7 00:28:23.113 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.ko7 00:28:23.113 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:28:23.113 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:23.113 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:23.113 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:23.113 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:28:23.113 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:23.113 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:23.113 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=77c6e0512c5dc9c8e12a9860be78ee695f554ab13759b114 00:28:23.113 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:23.113 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.o37 00:28:23.113 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 77c6e0512c5dc9c8e12a9860be78ee695f554ab13759b114 2 00:28:23.113 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 77c6e0512c5dc9c8e12a9860be78ee695f554ab13759b114 2 00:28:23.113 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:23.113 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:23.113 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=77c6e0512c5dc9c8e12a9860be78ee695f554ab13759b114 00:28:23.113 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:23.113 15:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:23.113 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.o37 00:28:23.113 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.o37 00:28:23.113 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.o37 00:28:23.113 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:28:23.113 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:23.113 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:23.113 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:23.113 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:23.113 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:23.113 15:38:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:23.113 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5987b24291dba02e0cf97798e34db535 00:28:23.113 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:23.113 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Cib 00:28:23.113 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5987b24291dba02e0cf97798e34db535 0 00:28:23.113 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5987b24291dba02e0cf97798e34db535 0 00:28:23.113 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:23.113 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:23.113 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5987b24291dba02e0cf97798e34db535 00:28:23.113 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:23.113 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:23.374 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Cib 00:28:23.374 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Cib 00:28:23.374 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Cib 00:28:23.374 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:28:23.374 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:23.374 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:23.374 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:23.374 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:28:23.374 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:28:23.374 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:23.374 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=be1b31016a203bf42a7997de5e34efd04a7e8c1a34038fd41e8d2fe413e6e6a7 00:28:23.374 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:28:23.374 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.H3N 00:28:23.374 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key be1b31016a203bf42a7997de5e34efd04a7e8c1a34038fd41e8d2fe413e6e6a7 3 00:28:23.374 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 be1b31016a203bf42a7997de5e34efd04a7e8c1a34038fd41e8d2fe413e6e6a7 3 00:28:23.374 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:23.374 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:23.374 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=be1b31016a203bf42a7997de5e34efd04a7e8c1a34038fd41e8d2fe413e6e6a7 00:28:23.374 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:28:23.374 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:28:23.374 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.H3N 00:28:23.374 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.H3N 00:28:23.374 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.H3N 00:28:23.374 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:28:23.374 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 756952 00:28:23.374 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 756952 ']' 00:28:23.374 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:23.374 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:23.374 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:23.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:23.374 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:23.374 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.y2k 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Oat ]] 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Oat 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.U3X 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.XWn ]] 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.XWn 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.9N5 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.ko7 ]] 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ko7 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.o37 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Cib ]] 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Cib 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.H3N 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:23.636 15:38:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:23.636 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:23.637 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:23.637 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:23.637 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:28:23.637 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]]
00:28:23.637 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet
00:28:23.637 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:28:23.637 15:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:28:26.938 Waiting for block devices as requested
00:28:27.197 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma
00:28:27.197 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma
00:28:27.197 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma
00:28:27.457 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma
00:28:27.457 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma
00:28:27.457 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma
00:28:27.716 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma
00:28:27.716 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma
00:28:27.716 0000:65:00.0 (144d a80a): vfio-pci -> nvme
00:28:27.976 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma
00:28:27.977 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma
00:28:27.977 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma
00:28:28.237 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma
00:28:28.237 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma
00:28:28.237 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma
00:28:28.237 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma
00:28:28.496 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma
00:28:29.435 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:28:29.435 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:28:29.435 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:28:29.435 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:28:29.435 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:28:29.435 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:28:29.435 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:28:29.435 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:28:29.435 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:28:29.435 No valid GPT data, bailing
00:28:29.435 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:28:29.435 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:28:29.435 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:28:29.435 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:28:29.435 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]]
00:28:29.435 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:28:29.435 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:28:29.435 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
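The mkdir records above and the echo/ln -s records that follow are nvmf/common.sh building a kernel NVMe-oF/TCP target over configfs for this auth test. Bash xtrace does not print redirections, so the destinations of the bare echo commands are not visible in the log; the following is a minimal sketch of the same wiring, assuming the standard kernel nvmet configfs attribute names (attr_model, attr_allow_any_host, device_path, enable, addr_*), which the trace itself does not confirm:

    # Sketch of the target setup traced above and below; paths and values come
    # from the log, the attribute file names are assumed from the nvmet configfs ABI.
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=$nvmet/ports/1

    modprobe nvmet
    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # model string shown to the host (assumed target)
    echo 1 > "$subsys/attr_allow_any_host"                        # relaxed for now; host/auth.sh tightens this below
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"        # back namespace 1 with the local NVMe disk
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"                           # TCP listener on 10.0.0.1:4420, IPv4
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"                           # publish the subsystem on the port

The nvme discover output below then reflects exactly this configuration: a discovery entry plus nqn.2024-02.io.spdk:cnode0, both tcp/ipv4 on 10.0.0.1:4420.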
00:28:29.435 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:28:29.435 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1
00:28:29.435 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:28:29.435 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1
00:28:29.435 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:28:29.435 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp
00:28:29.435 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420
00:28:29.435 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4
00:28:29.435 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:28:29.435 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420
00:28:29.435
00:28:29.435 Discovery Log Number of Records 2, Generation counter 2
00:28:29.435 =====Discovery Log Entry 0======
00:28:29.435 trtype: tcp
00:28:29.435 adrfam: ipv4
00:28:29.435 subtype: current discovery subsystem
00:28:29.436 treq: not specified, sq flow control disable supported
00:28:29.436 portid: 1
00:28:29.436 trsvcid: 4420
00:28:29.436 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:28:29.436 traddr: 10.0.0.1
00:28:29.436 eflags: none
00:28:29.436 sectype: none
00:28:29.436 =====Discovery Log Entry 1======
00:28:29.436 trtype: tcp
00:28:29.436 adrfam: ipv4
00:28:29.436 subtype: nvme subsystem
00:28:29.436 treq: not specified, sq flow control disable supported
00:28:29.436 portid: 1
00:28:29.436 trsvcid: 4420
00:28:29.436 subnqn: nqn.2024-02.io.spdk:cnode0
00:28:29.436 traddr: 10.0.0.1
00:28:29.436 eflags: none
00:28:29.436 sectype: none
00:28:29.436 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTMxZTk3ZTFkMjk4YWQ0YzI1OTQzY2M5YzYzYTkyNGRkZTE3YzJlZjc1OTdhMDlhWp7ZYw==:
15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==:
15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
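The host/auth.sh records above create the allowed-host entry for nqn.2024-02.io.spdk:host0 and begin nvmet_auth_set_key sha256 ffdhe2048 1; the dhgroup and key writes continue directly below. As before, xtrace hides the redirection targets. A sketch under the assumption that these land in the kernel's per-host DH-HMAC-CHAP attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) and that the bare 'echo 0' clears attr_allow_any_host so only the whitelisted host may connect:

    # Assumed reconstruction of the target-side auth setup for keyid 1.
    nvmet=/sys/kernel/config/nvmet
    host=$nvmet/hosts/nqn.2024-02.io.spdk:host0
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

    mkdir "$host"
    echo 0 > "$subsys/attr_allow_any_host"    # assumed destination of the bare 'echo 0'
    ln -s "$host" "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"
    echo 'hmac(sha256)' > "$host/dhchap_hash"    # HMAC digest for DH-HMAC-CHAP
    echo ffdhe2048 > "$host/dhchap_dhgroup"      # FFDHE group (the echo just below)
    # Host and controller secrets, exactly as printed in the trace:
    echo 'DHHC-1:00:ZTMxZTk3ZTFkMjk4YWQ0YzI1OTQzY2M5YzYzYTkyNGRkZTE3YzJlZjc1OTdhMDlhWp7ZYw==:' > "$host/dhchap_key"
    echo 'DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==:' > "$host/dhchap_ctrl_key"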
00:28:29.436 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:29.436 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTMxZTk3ZTFkMjk4YWQ0YzI1OTQzY2M5YzYzYTkyNGRkZTE3YzJlZjc1OTdhMDlhWp7ZYw==:
00:28:29.436 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: ]]
00:28:29.436 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==:
00:28:29.436 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:28:29.436 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512
00:28:29.436 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:28:29.436 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:28:29.436 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1
00:28:29.436 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:29.436 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512
00:28:29.436 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:28:29.436 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:29.436 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:29.436 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:28:29.436 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.436 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:29.436 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.436 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:29.436 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:29.436 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:29.436 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:29.436 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:29.436 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:29.436 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:29.436 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:29.436 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:29.436 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:29.436 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:29.436 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:29.436 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.436 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.696 nvme0n1 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU4OWZlMzAxYjZiZmY4NDhhMjk0YzQwZTMwMzA4YmLluIv1: 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU4OWZlMzAxYjZiZmY4NDhhMjk0YzQwZTMwMzA4YmLluIv1: 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: ]] 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
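The connect_authenticate call above (its body trace continues below) is the host side of the handshake: bdev_nvme_set_options restricts which digests and DH groups the SPDK initiator may negotiate, and bdev_nvme_attach_controller connects to the kernel target and authenticates with the named keys. key1 and ckey1 name keyring entries registered the same way as the key2 through key4 registrations visible at the top of this section; the temp-file names there appear to track the second DHHC-1 field (null -> 00, sha256 -> 01, sha384 -> 02, sha512 -> 03). rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py, so a standalone equivalent of this sequence, sketched here with key2/ckey2 because their file paths are actually visible in the log, would look roughly like:

    # Register the two secrets as named keys (paths taken from the log):
    ./scripts/rpc.py keyring_file_add_key key2 /tmp/spdk.key-sha256.9N5
    ./scripts/rpc.py keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ko7
    # Constrain DH-HMAC-CHAP negotiation on the initiator:
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    # Connect and authenticate against the kernel target configured earlier:
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # The test then verifies the controller came up and tears it down again:
    ./scripts/rpc.py bdev_nvme_get_controllers
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0

The for digest / for dhgroup / for keyid loop traced above repeats exactly this pairing, nvmet_auth_set_key on the target and connect_authenticate on the host, for every digest, FFDHE group, and key combination.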
00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.696 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.957 nvme0n1 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.957 15:38:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTMxZTk3ZTFkMjk4YWQ0YzI1OTQzY2M5YzYzYTkyNGRkZTE3YzJlZjc1OTdhMDlhWp7ZYw==: 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTMxZTk3ZTFkMjk4YWQ0YzI1OTQzY2M5YzYzYTkyNGRkZTE3YzJlZjc1OTdhMDlhWp7ZYw==: 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: ]] 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.957 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:29.958 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.958 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:29.958 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:29.958 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:29.958 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:29.958 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.958 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.219 nvme0n1 00:28:30.219 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.219 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.219 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.219 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.219 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.219 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.219 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.219 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.219 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.219 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.219 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.219 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.219 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:30.220 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.220 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:30.220 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:30.220 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:30.220 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDcxYjA4YzNjNzc1MmI0M2M0ZDBmNGJjNTVlYTkzZjD4HRHH: 00:28:30.220 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: 00:28:30.220 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:30.220 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:30.220 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:ZDcxYjA4YzNjNzc1MmI0M2M0ZDBmNGJjNTVlYTkzZjD4HRHH: 00:28:30.220 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: ]] 00:28:30.220 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: 00:28:30.220 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:28:30.220 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.220 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:30.220 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:30.220 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:30.220 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.220 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:30.220 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.220 15:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.220 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.220 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.220 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:30.220 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:30.220 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:30.220 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.220 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.220 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:30.220 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.220 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:30.220 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:30.220 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:30.220 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:30.220 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.220 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.220 nvme0n1 00:28:30.220 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.220 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.220 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.220 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:30.220 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.220 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdjNmUwNTEyYzVkYzljOGUxMmE5ODYwYmU3OGVlNjk1ZjU1NGFiMTM3NTliMTE0d+RWHQ==: 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdjNmUwNTEyYzVkYzljOGUxMmE5ODYwYmU3OGVlNjk1ZjU1NGFiMTM3NTliMTE0d+RWHQ==: 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: ]] 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.481 nvme0n1 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.481 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YmUxYjMxMDE2YTIwM2JmNDJhNzk5N2RlNWUzNGVmZDA0YTdlOGMxYTM0MDM4ZmQ0MWU4ZDJmZTQxM2U2ZTZhN11PUVE=: 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmUxYjMxMDE2YTIwM2JmNDJhNzk5N2RlNWUzNGVmZDA0YTdlOGMxYTM0MDM4ZmQ0MWU4ZDJmZTQxM2U2ZTZhN11PUVE=: 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.742 nvme0n1 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.742 15:38:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU4OWZlMzAxYjZiZmY4NDhhMjk0YzQwZTMwMzA4YmLluIv1: 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU4OWZlMzAxYjZiZmY4NDhhMjk0YzQwZTMwMzA4YmLluIv1: 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: ]] 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.742 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.003 nvme0n1 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTMxZTk3ZTFkMjk4YWQ0YzI1OTQzY2M5YzYzYTkyNGRkZTE3YzJlZjc1OTdhMDlhWp7ZYw==: 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTMxZTk3ZTFkMjk4YWQ0YzI1OTQzY2M5YzYzYTkyNGRkZTE3YzJlZjc1OTdhMDlhWp7ZYw==: 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: ]] 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.003 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.264 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.264 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:31.264 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:31.264 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:31.264 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.264 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.264 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:31.264 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.264 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:31.264 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:31.264 
15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:31.264 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:31.264 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.264 15:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.264 nvme0n1 00:28:31.264 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.264 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.264 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.264 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.264 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.264 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.264 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.264 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.264 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.264 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.264 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.264 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.264 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:28:31.264 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.264 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:31.264 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:31.264 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:31.264 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDcxYjA4YzNjNzc1MmI0M2M0ZDBmNGJjNTVlYTkzZjD4HRHH: 00:28:31.264 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: 00:28:31.264 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:31.264 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:31.264 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDcxYjA4YzNjNzc1MmI0M2M0ZDBmNGJjNTVlYTkzZjD4HRHH: 00:28:31.264 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: ]] 00:28:31.264 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: 00:28:31.264 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:28:31.264 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.264 15:38:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:31.264 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:31.264 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:31.264 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.264 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:31.264 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.264 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.524 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.524 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.524 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:31.524 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:31.524 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:31.524 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.524 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.524 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:31.524 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.524 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:31.524 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:31.524 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:31.524 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:31.524 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.524 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.524 nvme0n1 00:28:31.524 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.524 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.524 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.525 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.525 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.525 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.525 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.525 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.525 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.525 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:31.525 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.525 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.525 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:28:31.525 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.525 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:31.525 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:31.525 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:31.525 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdjNmUwNTEyYzVkYzljOGUxMmE5ODYwYmU3OGVlNjk1ZjU1NGFiMTM3NTliMTE0d+RWHQ==: 00:28:31.525 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: 00:28:31.525 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:31.525 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:31.525 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdjNmUwNTEyYzVkYzljOGUxMmE5ODYwYmU3OGVlNjk1ZjU1NGFiMTM3NTliMTE0d+RWHQ==: 00:28:31.525 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: ]] 00:28:31.525 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: 00:28:31.525 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:28:31.525 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.525 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:31.525 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:31.525 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:31.525 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.525 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:31.525 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.525 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.785 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.785 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.785 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:31.785 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:31.785 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:31.785 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.785 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.785 15:38:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:31.785 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.785 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:31.785 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:31.785 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:31.785 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:31.785 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.785 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.785 nvme0n1 00:28:31.785 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.785 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.785 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.785 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.785 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.785 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.785 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.785 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.785 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.785 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.785 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.785 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.785 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:28:31.785 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.785 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:31.785 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:31.785 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:31.786 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmUxYjMxMDE2YTIwM2JmNDJhNzk5N2RlNWUzNGVmZDA0YTdlOGMxYTM0MDM4ZmQ0MWU4ZDJmZTQxM2U2ZTZhN11PUVE=: 00:28:31.786 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:31.786 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:31.786 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:31.786 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmUxYjMxMDE2YTIwM2JmNDJhNzk5N2RlNWUzNGVmZDA0YTdlOGMxYTM0MDM4ZmQ0MWU4ZDJmZTQxM2U2ZTZhN11PUVE=: 00:28:31.786 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:31.786 15:38:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:28:31.786 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.786 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:31.786 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:31.786 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:31.786 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.786 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:31.786 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.786 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.046 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.046 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.046 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:32.047 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:32.047 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:32.047 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.047 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.047 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:32.047 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.047 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:32.047 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:32.047 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:32.047 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:32.047 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.047 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.047 nvme0n1 00:28:32.047 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.047 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.047 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.047 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.047 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.047 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.047 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.047 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:32.047 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.047 15:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.047 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.047 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:32.047 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.047 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:28:32.047 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.047 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:32.047 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:32.047 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:32.047 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU4OWZlMzAxYjZiZmY4NDhhMjk0YzQwZTMwMzA4YmLluIv1: 00:28:32.047 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: 00:28:32.047 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:32.307 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:32.307 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU4OWZlMzAxYjZiZmY4NDhhMjk0YzQwZTMwMzA4YmLluIv1: 00:28:32.307 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: ]] 00:28:32.307 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: 00:28:32.307 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:28:32.307 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.307 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:32.307 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:32.307 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:32.307 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.307 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:32.307 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.307 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.307 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.307 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.307 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:32.307 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:28:32.307 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:32.307 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.307 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.307 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:32.307 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.307 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:32.307 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:32.307 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:32.307 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:32.307 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.307 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.307 nvme0n1 00:28:32.308 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.308 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.308 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.308 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.308 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTMxZTk3ZTFkMjk4YWQ0YzI1OTQzY2M5YzYzYTkyNGRkZTE3YzJlZjc1OTdhMDlhWp7ZYw==: 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: 00:28:32.568 15:38:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTMxZTk3ZTFkMjk4YWQ0YzI1OTQzY2M5YzYzYTkyNGRkZTE3YzJlZjc1OTdhMDlhWp7ZYw==: 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: ]] 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.568 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.829 nvme0n1 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDcxYjA4YzNjNzc1MmI0M2M0ZDBmNGJjNTVlYTkzZjD4HRHH: 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDcxYjA4YzNjNzc1MmI0M2M0ZDBmNGJjNTVlYTkzZjD4HRHH: 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: ]] 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
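The stretch above is one complete key-rotation step, and every iteration in this section repeats it: nvmet_auth_set_key (host/auth.sh@42-51) pushes the digest, DH group, and DHHC-1 secrets for nqn.2024-02.io.spdk:host0 into the kernel nvmet target, then connect_authenticate (host/auth.sh@55-61) reconfigures the SPDK host side over RPC and re-attaches the controller. Below is a condensed sketch of the sha256/ffdhe4096/keyid=2 step, with two assumptions the trace itself does not show: that the echo targets are the usual kernel nvmet configfs attributes, and that rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py.

  # Target side (assumed configfs attribute names; secrets truncated here):
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"            # host/auth.sh@48
  echo ffdhe4096 > "$host/dhchap_dhgroup"              # host/auth.sh@49
  echo 'DHHC-1:01:ZDcx...:' > "$host/dhchap_key"       # host/auth.sh@50, key 2
  echo 'DHHC-1:01:ZGU2...:' > "$host/dhchap_ctrl_key"  # host/auth.sh@51, ckey 2
  # Host side (host/auth.sh@60-61); key2/ckey2 are key *names* the test
  # registered earlier in the run, not the DHHC-1 strings themselves:
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2       # prints nvme0n1 on success

In the DHHC-1:<t>:<base64>: strings, the middle field records the secret transformation (00 = none, 01/02/03 = SHA-256/384/512); that is background on the NVMe DH-HMAC-CHAP secret format, not something stated in this log.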
00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.829 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.089 nvme0n1 00:28:33.089 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.089 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.089 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.089 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.089 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.089 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.089 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.089 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.089 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.089 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.089 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.089 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.089 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:28:33.089 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.089 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:33.089 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:28:33.089 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:33.089 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdjNmUwNTEyYzVkYzljOGUxMmE5ODYwYmU3OGVlNjk1ZjU1NGFiMTM3NTliMTE0d+RWHQ==: 00:28:33.089 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: 00:28:33.090 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:33.090 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:33.090 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdjNmUwNTEyYzVkYzljOGUxMmE5ODYwYmU3OGVlNjk1ZjU1NGFiMTM3NTliMTE0d+RWHQ==: 00:28:33.090 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: ]] 00:28:33.090 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: 00:28:33.090 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:28:33.090 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.090 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:33.090 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:33.090 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:33.090 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.090 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:33.090 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.090 15:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.090 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.090 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.090 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:33.090 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:33.090 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:33.090 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.090 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.090 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:33.090 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.090 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:33.090 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:33.090 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:33.090 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:33.090 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.090 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.349 nvme0n1 00:28:33.349 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.349 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.349 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.349 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.349 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.349 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmUxYjMxMDE2YTIwM2JmNDJhNzk5N2RlNWUzNGVmZDA0YTdlOGMxYTM0MDM4ZmQ0MWU4ZDJmZTQxM2U2ZTZhN11PUVE=: 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmUxYjMxMDE2YTIwM2JmNDJhNzk5N2RlNWUzNGVmZDA0YTdlOGMxYTM0MDM4ZmQ0MWU4ZDJmZTQxM2U2ZTZhN11PUVE=: 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.610 15:38:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.610 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.870 nvme0n1 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU4OWZlMzAxYjZiZmY4NDhhMjk0YzQwZTMwMzA4YmLluIv1: 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU4OWZlMzAxYjZiZmY4NDhhMjk0YzQwZTMwMzA4YmLluIv1: 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: ]] 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.870 15:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.441 nvme0n1 00:28:34.441 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.441 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.441 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.441 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.441 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.441 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.441 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.441 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.441 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.441 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.441 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.441 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.441 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:28:34.441 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.442 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:34.442 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:34.442 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:34.442 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTMxZTk3ZTFkMjk4YWQ0YzI1OTQzY2M5YzYzYTkyNGRkZTE3YzJlZjc1OTdhMDlhWp7ZYw==: 00:28:34.442 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: 00:28:34.442 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:34.442 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:34.442 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTMxZTk3ZTFkMjk4YWQ0YzI1OTQzY2M5YzYzYTkyNGRkZTE3YzJlZjc1OTdhMDlhWp7ZYw==: 00:28:34.442 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: ]] 00:28:34.442 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: 
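The @101/@102 markers that appear at each ffdhe transition give the overall structure of this phase: an outer loop over DH groups, an inner loop over key ids, and a verify-and-detach check closing every iteration. Reconstructed schematically from the trace markers (the dhgroups/keys arrays and both functions live in host/auth.sh and are inferred here, not quoted):

  for dhgroup in "${dhgroups[@]}"; do        # host/auth.sh@101: ffdhe3072..ffdhe8192 in this stretch
      for keyid in "${!keys[@]}"; do         # host/auth.sh@102: key ids 0-4
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @103, target side
          connect_authenticate "$digest" "$dhgroup" "$keyid"  # @104, host side
      done
  done
  # Inside connect_authenticate the attach is verified and undone (@64-65):
  name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == "nvme0" ]]   # xtrace prints the quoted right-hand side escaped: \n\v\m\e\0
  rpc_cmd bdev_nvme_detach_controller nvme0

The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion seen before each attach is what makes key id 4 different: its controller key is empty (ckey= and [[ -z '' ]] in the trace), so the array expands to nothing, the --dhchap-ctrlr-key flag is dropped, and that iteration exercises unidirectional authentication.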
00:28:34.442 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:28:34.442 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.442 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:34.442 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:34.442 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:34.442 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.442 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:34.442 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.442 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.442 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.442 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.442 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:34.442 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:34.442 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:34.442 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.442 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.442 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:34.442 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.442 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:34.442 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:34.442 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:34.442 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:34.442 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.442 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.702 nvme0n1 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.703 15:38:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDcxYjA4YzNjNzc1MmI0M2M0ZDBmNGJjNTVlYTkzZjD4HRHH: 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDcxYjA4YzNjNzc1MmI0M2M0ZDBmNGJjNTVlYTkzZjD4HRHH: 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: ]] 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:34.703 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.964 15:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.225 nvme0n1 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdjNmUwNTEyYzVkYzljOGUxMmE5ODYwYmU3OGVlNjk1ZjU1NGFiMTM3NTliMTE0d+RWHQ==: 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:NzdjNmUwNTEyYzVkYzljOGUxMmE5ODYwYmU3OGVlNjk1ZjU1NGFiMTM3NTliMTE0d+RWHQ==: 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: ]] 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.225 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.795 nvme0n1 00:28:35.795 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.795 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.795 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.795 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.795 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.795 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.795 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.795 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.795 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.795 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.796 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.796 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.796 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:28:35.796 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.796 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:35.796 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:35.796 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:35.796 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmUxYjMxMDE2YTIwM2JmNDJhNzk5N2RlNWUzNGVmZDA0YTdlOGMxYTM0MDM4ZmQ0MWU4ZDJmZTQxM2U2ZTZhN11PUVE=: 00:28:35.796 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:35.796 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:35.796 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:35.796 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmUxYjMxMDE2YTIwM2JmNDJhNzk5N2RlNWUzNGVmZDA0YTdlOGMxYTM0MDM4ZmQ0MWU4ZDJmZTQxM2U2ZTZhN11PUVE=: 00:28:35.796 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:35.796 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:28:35.796 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.796 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:35.796 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:35.796 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:35.796 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.796 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:35.796 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.796 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.796 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.796 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.796 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:35.796 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.796 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.796 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.796 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.796 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.796 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.796 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.796 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.796 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.796 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:35.796 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.796 15:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.368 nvme0n1 00:28:36.368 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.368 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.368 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.368 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.368 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.368 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.368 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.368 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.368 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.368 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.368 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.368 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:36.368 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.368 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:28:36.368 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.368 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:36.368 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:36.368 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:36.368 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU4OWZlMzAxYjZiZmY4NDhhMjk0YzQwZTMwMzA4YmLluIv1: 00:28:36.368 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: 00:28:36.368 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:36.368 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:36.368 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU4OWZlMzAxYjZiZmY4NDhhMjk0YzQwZTMwMzA4YmLluIv1: 00:28:36.368 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: ]] 00:28:36.368 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: 00:28:36.368 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:28:36.368 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.368 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:36.368 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:36.368 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:36.368 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.368 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:36.368 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.368 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.368 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.369 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.369 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:36.369 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:36.369 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:36.369 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.369 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.369 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:36.369 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.369 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:36.369 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:36.369 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:36.369 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:36.369 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.369 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:36.938 nvme0n1 00:28:36.938 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.938 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.938 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.938 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.938 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.938 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.938 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.938 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.938 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.938 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.938 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.938 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.938 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:36.938 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.938 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:36.938 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:36.938 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:36.938 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTMxZTk3ZTFkMjk4YWQ0YzI1OTQzY2M5YzYzYTkyNGRkZTE3YzJlZjc1OTdhMDlhWp7ZYw==: 00:28:36.939 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: 00:28:36.939 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:36.939 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:36.939 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTMxZTk3ZTFkMjk4YWQ0YzI1OTQzY2M5YzYzYTkyNGRkZTE3YzJlZjc1OTdhMDlhWp7ZYw==: 00:28:36.939 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: ]] 00:28:36.939 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: 00:28:36.939 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:28:36.939 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.939 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:36.939 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:36.939 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:36.939 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:36.939 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:36.939 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.939 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.939 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.939 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.939 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:36.939 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:36.939 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:36.939 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.939 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.939 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:36.939 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.939 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:36.939 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:36.939 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:36.939 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:36.939 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.939 15:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.508 nvme0n1 00:28:37.508 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.508 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.508 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.508 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.508 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:37.769 
15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDcxYjA4YzNjNzc1MmI0M2M0ZDBmNGJjNTVlYTkzZjD4HRHH: 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDcxYjA4YzNjNzc1MmI0M2M0ZDBmNGJjNTVlYTkzZjD4HRHH: 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: ]] 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.769 15:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.340 nvme0n1 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdjNmUwNTEyYzVkYzljOGUxMmE5ODYwYmU3OGVlNjk1ZjU1NGFiMTM3NTliMTE0d+RWHQ==: 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdjNmUwNTEyYzVkYzljOGUxMmE5ODYwYmU3OGVlNjk1ZjU1NGFiMTM3NTliMTE0d+RWHQ==: 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: ]] 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.340 
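The connect_authenticate call starting here is the host-side half of the round: restrict the SPDK initiator to a single digest and DH group, then attach with the matching key. The two rpc_cmd invocations traced below (@60 and @61) map directly onto SPDK's scripts/rpc.py; a standalone equivalent, assuming the default RPC socket and that the key3/ckey3 names were registered earlier in the test, outside this excerpt:

    # Host side of the sha256/ffdhe8192/keyid=3 round, mirroring @60/@61 below.
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3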
15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.340 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.280 nvme0n1 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmUxYjMxMDE2YTIwM2JmNDJhNzk5N2RlNWUzNGVmZDA0YTdlOGMxYTM0MDM4ZmQ0MWU4ZDJmZTQxM2U2ZTZhN11PUVE=: 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmUxYjMxMDE2YTIwM2JmNDJhNzk5N2RlNWUzNGVmZDA0YTdlOGMxYTM0MDM4ZmQ0MWU4ZDJmZTQxM2U2ZTZhN11PUVE=: 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:39.280 15:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.280 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host 
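The nvmf/common.sh@769-@783 sequence being traced around this point is get_main_ns_ip, which resolves the 10.0.0.1 passed to the attach commands. A compact sketch of the logic visible in the trace; the TEST_TRANSPORT variable name and the indirect-expansion step are inferred, since xtrace shows only the expanded values (tcp, NVMF_INITIATOR_IP, 10.0.0.1):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
        ip=${ip_candidates[$TEST_TRANSPORT]}            # tcp -> NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT || -z $ip ]] && return 1  # the @775 guards
        ip=${!ip}                     # indirect: NVMF_INITIATOR_IP -> 10.0.0.1
        [[ -z $ip ]] && return 1      # @778
        echo "$ip"                    # @783
    }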
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:39.281 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:39.281 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:39.281 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:39.281 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.281 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.852 nvme0n1 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU4OWZlMzAxYjZiZmY4NDhhMjk0YzQwZTMwMzA4YmLluIv1: 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU4OWZlMzAxYjZiZmY4NDhhMjk0YzQwZTMwMzA4YmLluIv1: 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: ]] 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.852 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.113 nvme0n1 00:28:40.113 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.113 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.113 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.113 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.113 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:40.113 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.113 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.113 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.113 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.113 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.113 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.113 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.113 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:40.113 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.113 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:40.113 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:40.113 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:40.114 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTMxZTk3ZTFkMjk4YWQ0YzI1OTQzY2M5YzYzYTkyNGRkZTE3YzJlZjc1OTdhMDlhWp7ZYw==: 00:28:40.114 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: 00:28:40.114 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:40.114 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:40.114 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTMxZTk3ZTFkMjk4YWQ0YzI1OTQzY2M5YzYzYTkyNGRkZTE3YzJlZjc1OTdhMDlhWp7ZYw==: 00:28:40.114 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: ]] 00:28:40.114 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: 00:28:40.114 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:28:40.114 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.114 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:40.114 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:40.114 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:40.114 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.114 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:40.114 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.114 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.114 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.114 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:28:40.114 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:40.114 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:40.114 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:40.114 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.114 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.114 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:40.114 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.114 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:40.114 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:40.114 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:40.114 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:40.114 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.114 15:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.374 nvme0n1 00:28:40.374 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.374 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.374 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.374 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.374 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.374 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.374 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.374 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.374 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.374 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.374 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.375 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.375 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:40.375 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.375 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:40.375 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:40.375 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:40.375 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDcxYjA4YzNjNzc1MmI0M2M0ZDBmNGJjNTVlYTkzZjD4HRHH: 00:28:40.375 15:38:29 
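Alongside the per-slot key/ckey assignments resuming below, note the @58 line that recurs in every connect_authenticate in this trace: ckey is built with the ${var:+word} expansion, so it collapses to an empty array when the slot has no controller key (as for keyid 4, where the trace shows [[ -z '' ]]):

    # The @58 idiom, verbatim from the trace: the flag pair exists only for
    # bidirectional slots, so "${ckey[@]}" expands to nothing for key4.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

which is why the keyid-4 attach lines carry --dhchap-key key4 with no --dhchap-ctrlr-key.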
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: 00:28:40.375 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:40.375 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:40.375 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDcxYjA4YzNjNzc1MmI0M2M0ZDBmNGJjNTVlYTkzZjD4HRHH: 00:28:40.375 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: ]] 00:28:40.375 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: 00:28:40.375 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:28:40.375 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.375 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:40.375 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:40.375 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:40.375 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.375 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:40.375 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.375 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.375 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.375 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.375 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:40.375 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:40.375 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:40.375 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.375 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.375 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:40.375 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.375 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:40.375 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:40.375 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:40.375 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:40.375 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.375 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.635 nvme0n1 00:28:40.635 15:38:29 
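Each round ends with the @64/@65 verification and teardown that follows: the attach must have produced exactly one controller named nvme0 before the next slot is tried. Standalone, and assuming the suite's errexit setting aborts the run on a failed test, the check amounts to:

    # Post-attach verification and cleanup, mirroring @64/@65 in the trace.
    name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]      # the [[ nvme0 == \n\v\m\e\0 ]] comparison
    scripts/rpc.py bdev_nvme_detach_controller nvme0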
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.635 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.635 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.635 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.635 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.635 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.635 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.635 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.635 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.635 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.635 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.635 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.635 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:40.635 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.635 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:40.635 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:40.635 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:40.635 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdjNmUwNTEyYzVkYzljOGUxMmE5ODYwYmU3OGVlNjk1ZjU1NGFiMTM3NTliMTE0d+RWHQ==: 00:28:40.635 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: 00:28:40.635 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:40.635 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:40.635 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdjNmUwNTEyYzVkYzljOGUxMmE5ODYwYmU3OGVlNjk1ZjU1NGFiMTM3NTliMTE0d+RWHQ==: 00:28:40.635 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: ]] 00:28:40.635 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: 00:28:40.635 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:28:40.635 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.635 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:40.635 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:40.635 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:40.635 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.635 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:28:40.635 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.635 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.635 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.636 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.636 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:40.636 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:40.636 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:40.636 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.636 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.636 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:40.636 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.636 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:40.636 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:40.636 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:40.636 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:40.636 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.636 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.636 nvme0n1 00:28:40.636 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.636 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.636 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.636 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.636 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.896 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.896 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.896 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.896 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.896 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.896 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.896 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.896 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:40.896 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.896 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:28:40.896 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:40.896 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:40.896 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmUxYjMxMDE2YTIwM2JmNDJhNzk5N2RlNWUzNGVmZDA0YTdlOGMxYTM0MDM4ZmQ0MWU4ZDJmZTQxM2U2ZTZhN11PUVE=: 00:28:40.896 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:40.896 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:40.896 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:40.896 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmUxYjMxMDE2YTIwM2JmNDJhNzk5N2RlNWUzNGVmZDA0YTdlOGMxYTM0MDM4ZmQ0MWU4ZDJmZTQxM2U2ZTZhN11PUVE=: 00:28:40.896 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:40.896 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:40.896 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.896 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:40.896 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:40.896 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:40.896 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.896 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:40.896 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.896 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.896 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.896 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.896 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:40.897 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:40.897 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:40.897 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.897 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.897 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:40.897 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.897 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:40.897 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:40.897 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:40.897 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:40.897 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.897 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.897 nvme0n1 00:28:40.897 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.897 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.897 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.897 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.897 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.897 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU4OWZlMzAxYjZiZmY4NDhhMjk0YzQwZTMwMzA4YmLluIv1: 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU4OWZlMzAxYjZiZmY4NDhhMjk0YzQwZTMwMzA4YmLluIv1: 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: ]] 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.158 15:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.158 nvme0n1 00:28:41.158 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.158 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.158 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.158 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.158 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.158 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.419 
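For readers tracing the loop above: each iteration of this sha384 pass boils down to the short sequence below. The rpc_cmd invocations are condensed verbatim from the xtrace; the configfs paths behind nvmet_auth_set_key's bare echo calls are NOT shown by xtrace (it hides redirections), so those targets are an assumption based on the kernel nvmet host attributes — a sketch of one iteration, not the script's literal text.

  # Target side (assumed redirection targets; HOSTNQN is illustrative):
  echo 'hmac(sha384)' > /sys/kernel/config/nvmet/hosts/${HOSTNQN}/dhchap_hash
  echo 'ffdhe3072'    > /sys/kernel/config/nvmet/hosts/${HOSTNQN}/dhchap_dhgroup
  echo "${keys[0]}"   > /sys/kernel/config/nvmet/hosts/${HOSTNQN}/dhchap_key
  echo "${ckeys[0]}"  > /sys/kernel/config/nvmet/hosts/${HOSTNQN}/dhchap_ctrl_key
  # Host side (flags verbatim from the trace; key0/ckey0 are key names
  # registered earlier in the run, not shown in this excerpt):
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Verify the authenticated connect produced a controller, then tear down:
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0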
15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTMxZTk3ZTFkMjk4YWQ0YzI1OTQzY2M5YzYzYTkyNGRkZTE3YzJlZjc1OTdhMDlhWp7ZYw==: 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTMxZTk3ZTFkMjk4YWQ0YzI1OTQzY2M5YzYzYTkyNGRkZTE3YzJlZjc1OTdhMDlhWp7ZYw==: 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: ]] 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:41.419 15:38:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.419 nvme0n1 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.419 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDcxYjA4YzNjNzc1MmI0M2M0ZDBmNGJjNTVlYTkzZjD4HRHH: 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDcxYjA4YzNjNzc1MmI0M2M0ZDBmNGJjNTVlYTkzZjD4HRHH: 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: ]] 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.680 nvme0n1 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.680 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.940 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:28:41.940 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.940 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.940 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.940 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.940 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.940 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:41.940 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.940 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:41.940 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:41.940 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:41.940 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdjNmUwNTEyYzVkYzljOGUxMmE5ODYwYmU3OGVlNjk1ZjU1NGFiMTM3NTliMTE0d+RWHQ==: 00:28:41.940 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: 00:28:41.940 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:41.940 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:41.940 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdjNmUwNTEyYzVkYzljOGUxMmE5ODYwYmU3OGVlNjk1ZjU1NGFiMTM3NTliMTE0d+RWHQ==: 00:28:41.940 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: ]] 00:28:41.940 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: 00:28:41.940 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:41.940 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.940 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:41.941 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:41.941 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:41.941 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.941 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:41.941 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.941 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.941 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.941 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.941 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:41.941 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:41.941 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:28:41.941 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.941 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.941 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:41.941 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.941 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:41.941 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:41.941 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:41.941 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:41.941 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.941 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.941 nvme0n1 00:28:41.941 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.941 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.941 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.941 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.941 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.941 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.201 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.201 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.201 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.201 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.201 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.201 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.201 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:42.201 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.201 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:42.201 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:42.201 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:42.201 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmUxYjMxMDE2YTIwM2JmNDJhNzk5N2RlNWUzNGVmZDA0YTdlOGMxYTM0MDM4ZmQ0MWU4ZDJmZTQxM2U2ZTZhN11PUVE=: 00:28:42.201 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:42.201 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:42.201 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:42.201 
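A detail worth noting in the recurring host/auth.sh@58 lines: ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}). The :+ expansion leaves ckey an empty array when no controller key is configured, which is why the keyid=4 attach just below carries no --dhchap-ctrlr-key flag while keyids 0-3 do. A minimal standalone illustration of the idiom (values are illustrative, not from the script):

  # ${var:+word} expands to word only when var is set and non-empty; in an
  # unquoted array assignment that yields zero extra args instead of an empty "".
  ckeys=([0]="DHHC-1:03:example" [4]="")
  for keyid in 0 4; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid -> ${#ckey[@]} extra arg(s): ${ckey[*]}"
  done
  # keyid=0 -> 2 extra arg(s): --dhchap-ctrlr-key ckey0
  # keyid=4 -> 0 extra arg(s):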
15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmUxYjMxMDE2YTIwM2JmNDJhNzk5N2RlNWUzNGVmZDA0YTdlOGMxYTM0MDM4ZmQ0MWU4ZDJmZTQxM2U2ZTZhN11PUVE=: 00:28:42.201 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:42.201 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:42.201 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.201 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:42.201 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:42.201 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:42.201 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.201 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:42.201 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.201 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.201 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.202 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.202 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:42.202 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:42.202 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:42.202 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.202 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.202 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:42.202 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.202 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:42.202 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:42.202 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:42.202 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:42.202 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.202 15:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.202 nvme0n1 00:28:42.202 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.202 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.202 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.202 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.202 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.202 
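The get_main_ns_ip helper that reappears before every attach (the nvmf/common.sh@769-783 lines above) resolves which address the initiator should dial. Reconstructed from the xtrace, its logic is roughly the following; the variable names and the indirection are visible in the trace, but the failure branches are an assumption since every call here succeeds:

  # A sketch reconstructed from the xtrace of nvmf/common.sh, not the literal source.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      # TEST_TRANSPORT is tcp in this run, so ip becomes the *name*
      # NVMF_INITIATOR_IP, which the harness has set to 10.0.0.1.
      [[ -z ${TEST_TRANSPORT} || -z ${ip_candidates[${TEST_TRANSPORT}]} ]] && return 1
      ip=${ip_candidates[${TEST_TRANSPORT}]}
      [[ -z ${!ip} ]] && return 1   # dereference the name, reject empty values
      echo "${!ip}"                  # -> 10.0.0.1 in this trace
  }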
15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU4OWZlMzAxYjZiZmY4NDhhMjk0YzQwZTMwMzA4YmLluIv1: 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU4OWZlMzAxYjZiZmY4NDhhMjk0YzQwZTMwMzA4YmLluIv1: 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: ]] 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.462 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.723 nvme0n1 00:28:42.723 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.723 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.723 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.723 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.723 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.723 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.723 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.723 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.723 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.723 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.723 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.723 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.723 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:42.723 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.723 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:42.723 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:42.723 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:42.723 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTMxZTk3ZTFkMjk4YWQ0YzI1OTQzY2M5YzYzYTkyNGRkZTE3YzJlZjc1OTdhMDlhWp7ZYw==: 00:28:42.723 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: 00:28:42.723 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:42.723 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:42.723 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTMxZTk3ZTFkMjk4YWQ0YzI1OTQzY2M5YzYzYTkyNGRkZTE3YzJlZjc1OTdhMDlhWp7ZYw==: 00:28:42.723 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: ]] 00:28:42.723 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: 00:28:42.723 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:42.723 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.723 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:42.723 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:42.723 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:42.723 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.723 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:42.723 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.723 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.723 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.723 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.723 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:42.723 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:42.724 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:42.724 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.724 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.724 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:42.724 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.724 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:42.724 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:42.724 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:42.724 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:42.724 15:38:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.724 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.984 nvme0n1 00:28:42.984 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.984 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.984 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.984 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.984 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.984 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.984 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDcxYjA4YzNjNzc1MmI0M2M0ZDBmNGJjNTVlYTkzZjD4HRHH: 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDcxYjA4YzNjNzc1MmI0M2M0ZDBmNGJjNTVlYTkzZjD4HRHH: 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: ]] 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.985 15:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.245 nvme0n1 00:28:43.245 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.245 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.245 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.245 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.245 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.245 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdjNmUwNTEyYzVkYzljOGUxMmE5ODYwYmU3OGVlNjk1ZjU1NGFiMTM3NTliMTE0d+RWHQ==: 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdjNmUwNTEyYzVkYzljOGUxMmE5ODYwYmU3OGVlNjk1ZjU1NGFiMTM3NTliMTE0d+RWHQ==: 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: ]] 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.506 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.766 nvme0n1 00:28:43.766 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.766 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.766 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.766 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.766 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.766 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.766 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.766 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.766 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.766 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.766 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.766 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.766 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:43.766 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.767 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:43.767 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:43.767 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:43.767 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmUxYjMxMDE2YTIwM2JmNDJhNzk5N2RlNWUzNGVmZDA0YTdlOGMxYTM0MDM4ZmQ0MWU4ZDJmZTQxM2U2ZTZhN11PUVE=: 00:28:43.767 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:43.767 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:43.767 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:43.767 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmUxYjMxMDE2YTIwM2JmNDJhNzk5N2RlNWUzNGVmZDA0YTdlOGMxYTM0MDM4ZmQ0MWU4ZDJmZTQxM2U2ZTZhN11PUVE=: 00:28:43.767 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:43.767 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:43.767 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.767 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:43.767 15:38:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:43.767 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:43.767 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.767 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:43.767 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.767 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.767 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.767 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.767 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:43.767 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:43.767 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:43.767 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.767 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.767 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:43.767 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.767 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:43.767 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:43.767 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:43.767 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:43.767 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.767 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.026 nvme0n1 00:28:44.026 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.026 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.026 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.026 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.026 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.026 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.026 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.026 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.026 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.026 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.026 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.026 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:44.026 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.026 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:44.026 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.026 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:44.026 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:44.026 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:44.026 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU4OWZlMzAxYjZiZmY4NDhhMjk0YzQwZTMwMzA4YmLluIv1: 00:28:44.027 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: 00:28:44.027 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:44.027 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:44.027 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU4OWZlMzAxYjZiZmY4NDhhMjk0YzQwZTMwMzA4YmLluIv1: 00:28:44.027 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: ]] 00:28:44.027 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: 00:28:44.027 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:44.027 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.027 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:44.027 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:44.027 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:44.027 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.027 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:44.027 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.027 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.027 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.027 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.027 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:44.027 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:44.027 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:44.027 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.027 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.027 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:44.027 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.027 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:44.027 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:44.027 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:44.027 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:44.027 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.027 15:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.601 nvme0n1 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTMxZTk3ZTFkMjk4YWQ0YzI1OTQzY2M5YzYzYTkyNGRkZTE3YzJlZjc1OTdhMDlhWp7ZYw==: 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTMxZTk3ZTFkMjk4YWQ0YzI1OTQzY2M5YzYzYTkyNGRkZTE3YzJlZjc1OTdhMDlhWp7ZYw==: 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: ]] 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.601 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.931 nvme0n1 00:28:44.931 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.931 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.931 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.931 15:38:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.931 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.931 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.209 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.209 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.209 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.209 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.209 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.209 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.209 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:45.209 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.210 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:45.210 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:45.210 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:45.210 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDcxYjA4YzNjNzc1MmI0M2M0ZDBmNGJjNTVlYTkzZjD4HRHH: 00:28:45.210 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: 00:28:45.210 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:45.210 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:45.210 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDcxYjA4YzNjNzc1MmI0M2M0ZDBmNGJjNTVlYTkzZjD4HRHH: 00:28:45.210 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: ]] 00:28:45.210 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: 00:28:45.210 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:45.210 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.210 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:45.210 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:45.210 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:45.210 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.210 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:45.210 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.210 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.210 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.210 15:38:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.210 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:45.210 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:45.210 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:45.210 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.210 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.210 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:45.210 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.210 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:45.210 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:45.210 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:45.210 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:45.210 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.210 15:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.500 nvme0n1 00:28:45.500 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.500 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.500 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.500 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.500 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.500 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.500 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.500 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.500 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.500 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.500 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.500 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.500 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:45.500 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.500 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:45.500 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:45.500 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:45.500 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NzdjNmUwNTEyYzVkYzljOGUxMmE5ODYwYmU3OGVlNjk1ZjU1NGFiMTM3NTliMTE0d+RWHQ==: 00:28:45.500 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: 00:28:45.500 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:45.500 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:45.500 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdjNmUwNTEyYzVkYzljOGUxMmE5ODYwYmU3OGVlNjk1ZjU1NGFiMTM3NTliMTE0d+RWHQ==: 00:28:45.500 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: ]] 00:28:45.500 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: 00:28:45.500 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:45.500 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.500 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:45.500 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:45.500 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:45.500 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.500 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:45.500 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.500 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.500 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.500 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.500 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:45.500 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:45.500 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:45.501 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.501 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.501 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:45.501 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.501 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:45.501 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:45.501 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:45.501 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:45.501 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.501 
15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.070 nvme0n1 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmUxYjMxMDE2YTIwM2JmNDJhNzk5N2RlNWUzNGVmZDA0YTdlOGMxYTM0MDM4ZmQ0MWU4ZDJmZTQxM2U2ZTZhN11PUVE=: 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmUxYjMxMDE2YTIwM2JmNDJhNzk5N2RlNWUzNGVmZDA0YTdlOGMxYTM0MDM4ZmQ0MWU4ZDJmZTQxM2U2ZTZhN11PUVE=: 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.070 15:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.641 nvme0n1 00:28:46.641 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.641 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.641 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.641 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.641 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.641 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.641 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.641 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.641 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.641 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.641 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.641 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:46.641 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.641 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:46.641 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.641 15:38:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:46.641 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:46.641 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:46.641 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU4OWZlMzAxYjZiZmY4NDhhMjk0YzQwZTMwMzA4YmLluIv1: 00:28:46.641 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: 00:28:46.641 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:46.641 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:46.641 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU4OWZlMzAxYjZiZmY4NDhhMjk0YzQwZTMwMzA4YmLluIv1: 00:28:46.641 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: ]] 00:28:46.641 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: 00:28:46.641 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:46.641 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.641 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:46.641 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:46.641 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:46.641 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.642 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:46.642 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.642 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.642 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.642 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.642 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:46.642 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:46.642 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:46.642 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.642 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.642 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:46.642 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.642 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:46.642 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:46.642 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:46.642 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:46.642 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.642 15:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.213 nvme0n1 00:28:47.213 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.213 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.213 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.213 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.213 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.213 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.213 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.213 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.213 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.213 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.213 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.213 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.213 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:28:47.213 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.213 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:47.213 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:47.213 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:47.213 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTMxZTk3ZTFkMjk4YWQ0YzI1OTQzY2M5YzYzYTkyNGRkZTE3YzJlZjc1OTdhMDlhWp7ZYw==: 00:28:47.213 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: 00:28:47.213 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:47.213 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:47.213 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTMxZTk3ZTFkMjk4YWQ0YzI1OTQzY2M5YzYzYTkyNGRkZTE3YzJlZjc1OTdhMDlhWp7ZYw==: 00:28:47.213 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: ]] 00:28:47.213 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: 00:28:47.213 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:47.213 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.213 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:47.213 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:47.213 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:47.213 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.213 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:47.213 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.213 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.213 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.213 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.213 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:47.213 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:47.213 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:47.214 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.214 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.214 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:47.214 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.214 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:47.214 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:47.214 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:47.214 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:47.214 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.214 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.154 nvme0n1 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDcxYjA4YzNjNzc1MmI0M2M0ZDBmNGJjNTVlYTkzZjD4HRHH: 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDcxYjA4YzNjNzc1MmI0M2M0ZDBmNGJjNTVlYTkzZjD4HRHH: 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: ]] 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.155 
15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.155 15:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.726 nvme0n1 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdjNmUwNTEyYzVkYzljOGUxMmE5ODYwYmU3OGVlNjk1ZjU1NGFiMTM3NTliMTE0d+RWHQ==: 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdjNmUwNTEyYzVkYzljOGUxMmE5ODYwYmU3OGVlNjk1ZjU1NGFiMTM3NTliMTE0d+RWHQ==: 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: ]] 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.726 15:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.296 nvme0n1 00:28:49.296 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.296 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.296 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.296 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.296 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.296 15:38:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmUxYjMxMDE2YTIwM2JmNDJhNzk5N2RlNWUzNGVmZDA0YTdlOGMxYTM0MDM4ZmQ0MWU4ZDJmZTQxM2U2ZTZhN11PUVE=: 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmUxYjMxMDE2YTIwM2JmNDJhNzk5N2RlNWUzNGVmZDA0YTdlOGMxYTM0MDM4ZmQ0MWU4ZDJmZTQxM2U2ZTZhN11PUVE=: 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:49.557 15:38:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.557 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.128 nvme0n1 00:28:50.128 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.128 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.128 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.128 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.128 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.128 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.128 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.128 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.128 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.128 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.128 15:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.128 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:50.128 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:50.128 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.128 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:50.128 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.128 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:50.128 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:50.128 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:50.128 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU4OWZlMzAxYjZiZmY4NDhhMjk0YzQwZTMwMzA4YmLluIv1: 00:28:50.128 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: 00:28:50.128 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:50.128 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:50.128 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU4OWZlMzAxYjZiZmY4NDhhMjk0YzQwZTMwMzA4YmLluIv1: 00:28:50.128 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: ]] 00:28:50.128 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: 00:28:50.128 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:50.128 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.128 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:50.128 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:50.128 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:50.128 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.128 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:50.128 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.128 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.128 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.128 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.128 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:50.128 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:50.128 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:50.128 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.128 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.128 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:50.128 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.129 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:50.129 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:50.129 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:50.129 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:50.129 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.129 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:50.388 nvme0n1 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTMxZTk3ZTFkMjk4YWQ0YzI1OTQzY2M5YzYzYTkyNGRkZTE3YzJlZjc1OTdhMDlhWp7ZYw==: 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTMxZTk3ZTFkMjk4YWQ0YzI1OTQzY2M5YzYzYTkyNGRkZTE3YzJlZjc1OTdhMDlhWp7ZYw==: 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: ]] 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.388 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.648 nvme0n1 00:28:50.648 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.648 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:50.649 
15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDcxYjA4YzNjNzc1MmI0M2M0ZDBmNGJjNTVlYTkzZjD4HRHH: 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDcxYjA4YzNjNzc1MmI0M2M0ZDBmNGJjNTVlYTkzZjD4HRHH: 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: ]] 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.649 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.910 nvme0n1 00:28:50.910 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.910 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.910 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.910 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.910 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.910 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.910 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.910 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.910 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.910 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.910 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.910 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.911 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:50.911 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.911 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:50.911 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:50.911 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:50.911 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdjNmUwNTEyYzVkYzljOGUxMmE5ODYwYmU3OGVlNjk1ZjU1NGFiMTM3NTliMTE0d+RWHQ==: 00:28:50.911 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: 00:28:50.911 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:50.911 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:50.911 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdjNmUwNTEyYzVkYzljOGUxMmE5ODYwYmU3OGVlNjk1ZjU1NGFiMTM3NTliMTE0d+RWHQ==: 00:28:50.911 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: ]] 00:28:50.911 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: 00:28:50.911 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:50.911 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.911 
15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:50.911 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:50.911 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:50.911 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.911 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:50.911 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.911 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.911 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.911 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.911 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:50.911 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:50.911 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:50.911 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.911 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.911 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:50.911 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.911 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:50.911 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:50.911 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:50.911 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:50.911 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.911 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.172 nvme0n1 00:28:51.172 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.172 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.172 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.172 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.172 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.172 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.172 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.172 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.172 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.172 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:51.172 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.172 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.172 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:51.172 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.172 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:51.172 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:51.172 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:51.172 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmUxYjMxMDE2YTIwM2JmNDJhNzk5N2RlNWUzNGVmZDA0YTdlOGMxYTM0MDM4ZmQ0MWU4ZDJmZTQxM2U2ZTZhN11PUVE=: 00:28:51.172 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:51.172 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:51.172 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:51.173 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmUxYjMxMDE2YTIwM2JmNDJhNzk5N2RlNWUzNGVmZDA0YTdlOGMxYTM0MDM4ZmQ0MWU4ZDJmZTQxM2U2ZTZhN11PUVE=: 00:28:51.173 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:51.173 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:51.173 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.173 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:51.173 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:51.173 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:51.173 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.173 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:51.173 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.173 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.173 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.173 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.173 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:51.173 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:51.173 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:51.173 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.173 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.173 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:51.173 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.173 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:51.173 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:51.173 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:51.173 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:51.173 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.173 15:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.173 nvme0n1 00:28:51.173 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.173 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.173 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.173 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.173 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.173 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU4OWZlMzAxYjZiZmY4NDhhMjk0YzQwZTMwMzA4YmLluIv1: 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU4OWZlMzAxYjZiZmY4NDhhMjk0YzQwZTMwMzA4YmLluIv1: 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: ]] 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.434 nvme0n1 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.434 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.695 
15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.695 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.695 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.695 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.695 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.695 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.695 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:51.695 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.695 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:51.695 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:51.695 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:51.695 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTMxZTk3ZTFkMjk4YWQ0YzI1OTQzY2M5YzYzYTkyNGRkZTE3YzJlZjc1OTdhMDlhWp7ZYw==: 00:28:51.695 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: 00:28:51.695 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:51.695 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:51.695 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTMxZTk3ZTFkMjk4YWQ0YzI1OTQzY2M5YzYzYTkyNGRkZTE3YzJlZjc1OTdhMDlhWp7ZYw==: 00:28:51.695 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: ]] 00:28:51.695 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: 00:28:51.695 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:51.695 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.695 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:51.695 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:51.695 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:51.695 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.695 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:51.695 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.695 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.695 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.695 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.695 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:51.695 15:38:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:51.696 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:51.696 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.696 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.696 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:51.696 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.696 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:51.696 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:51.696 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:51.696 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:51.696 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.696 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.696 nvme0n1 00:28:51.696 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.696 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.696 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.696 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.696 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.696 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDcxYjA4YzNjNzc1MmI0M2M0ZDBmNGJjNTVlYTkzZjD4HRHH: 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: 00:28:51.957 15:38:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDcxYjA4YzNjNzc1MmI0M2M0ZDBmNGJjNTVlYTkzZjD4HRHH: 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: ]] 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.957 nvme0n1 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.957 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.218 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.218 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.218 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.218 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.218 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.218 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.218 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.218 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:52.218 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.218 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:52.218 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:52.218 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:52.218 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdjNmUwNTEyYzVkYzljOGUxMmE5ODYwYmU3OGVlNjk1ZjU1NGFiMTM3NTliMTE0d+RWHQ==: 00:28:52.218 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: 00:28:52.218 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:52.218 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:52.218 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdjNmUwNTEyYzVkYzljOGUxMmE5ODYwYmU3OGVlNjk1ZjU1NGFiMTM3NTliMTE0d+RWHQ==: 00:28:52.218 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: ]] 00:28:52.218 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: 00:28:52.218 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:52.218 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.218 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:52.218 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:52.218 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:52.218 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.218 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:52.218 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.218 15:38:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.218 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.218 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.218 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:52.218 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:52.219 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:52.219 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.219 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.219 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:52.219 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.219 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:52.219 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:52.219 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:52.219 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:52.219 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.219 15:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.219 nvme0n1 00:28:52.219 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.219 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.219 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.219 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.219 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.480 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.480 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.480 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.480 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.480 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.480 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.480 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.480 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:52.480 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.480 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:52.480 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:52.480 
15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:52.480 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmUxYjMxMDE2YTIwM2JmNDJhNzk5N2RlNWUzNGVmZDA0YTdlOGMxYTM0MDM4ZmQ0MWU4ZDJmZTQxM2U2ZTZhN11PUVE=: 00:28:52.480 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:52.480 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:52.480 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:52.480 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmUxYjMxMDE2YTIwM2JmNDJhNzk5N2RlNWUzNGVmZDA0YTdlOGMxYTM0MDM4ZmQ0MWU4ZDJmZTQxM2U2ZTZhN11PUVE=: 00:28:52.480 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:52.480 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:52.480 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.480 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:52.480 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:52.480 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:52.480 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.480 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:52.480 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.481 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.481 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.481 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.481 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:52.481 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:52.481 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:52.481 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.481 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.481 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:52.481 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.481 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:52.481 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:52.481 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:52.481 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:52.481 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.481 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
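For readability: the xtrace above (and the repeats that follow) is host/auth.sh sweeping every DH group against every key index. A condensed sketch of that loop, reconstructed from the trace itself (rpc_cmd is the test harness wrapper that forwards to SPDK's rpc.py; the dhgroups, keys and ckeys arrays hold the DHHC-1 secrets set up earlier in the script):

  for dhgroup in "${dhgroups[@]}"; do                 # ffdhe2048, ffdhe3072, ffdhe4096, ...
    for keyid in "${!keys[@]}"; do                    # key indexes 0..4
      # program the kernel nvmet target side with the same digest/dhgroup/key
      nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
      # configure the SPDK host side, then attach with DH-HMAC-CHAP
      rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
      # authentication succeeded iff the controller shows up under its name
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
      rpc_cmd bdev_nvme_detach_controller nvme0       # tear down before the next combination
    done
  done

Note the ${ckeys[keyid]:+...} expansion: ckeys[4] is empty, so the attach for key 4 omits --dhchap-ctrlr-key (visible in the trace above, where only keys 0-3 carry a ckey), and each pass is verified simply by checking that bdev_nvme_get_controllers reports nvme0 before detaching.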
00:28:52.481 nvme0n1 00:28:52.481 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU4OWZlMzAxYjZiZmY4NDhhMjk0YzQwZTMwMzA4YmLluIv1: 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU4OWZlMzAxYjZiZmY4NDhhMjk0YzQwZTMwMzA4YmLluIv1: 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: ]] 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:52.742 15:38:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.742 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.003 nvme0n1 00:28:53.003 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.003 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.003 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.003 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.003 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.003 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.003 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.003 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.004 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.004 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.004 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.004 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.004 15:38:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:53.004 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.004 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:53.004 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:53.004 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:53.004 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTMxZTk3ZTFkMjk4YWQ0YzI1OTQzY2M5YzYzYTkyNGRkZTE3YzJlZjc1OTdhMDlhWp7ZYw==: 00:28:53.004 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: 00:28:53.004 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:53.004 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:53.004 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTMxZTk3ZTFkMjk4YWQ0YzI1OTQzY2M5YzYzYTkyNGRkZTE3YzJlZjc1OTdhMDlhWp7ZYw==: 00:28:53.004 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: ]] 00:28:53.004 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: 00:28:53.004 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:53.004 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.004 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:53.004 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:53.004 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:53.004 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.004 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:53.004 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.004 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.004 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.004 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.004 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:53.004 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:53.004 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:53.004 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.004 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.004 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:53.004 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.004 15:38:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:53.004 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:53.004 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:53.004 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:53.004 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.004 15:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.265 nvme0n1 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDcxYjA4YzNjNzc1MmI0M2M0ZDBmNGJjNTVlYTkzZjD4HRHH: 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDcxYjA4YzNjNzc1MmI0M2M0ZDBmNGJjNTVlYTkzZjD4HRHH: 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: ]] 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.265 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.526 nvme0n1 00:28:53.526 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.526 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.526 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.526 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.526 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.526 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.787 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.787 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:53.787 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.787 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.787 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.787 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.787 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:53.787 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.787 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:53.787 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:53.787 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:53.787 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdjNmUwNTEyYzVkYzljOGUxMmE5ODYwYmU3OGVlNjk1ZjU1NGFiMTM3NTliMTE0d+RWHQ==: 00:28:53.787 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: 00:28:53.787 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:53.787 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:53.787 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdjNmUwNTEyYzVkYzljOGUxMmE5ODYwYmU3OGVlNjk1ZjU1NGFiMTM3NTliMTE0d+RWHQ==: 00:28:53.787 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: ]] 00:28:53.787 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: 00:28:53.788 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:53.788 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.788 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:53.788 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:53.788 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:53.788 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.788 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:53.788 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.788 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.788 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.788 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.788 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:53.788 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:53.788 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:53.788 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.788 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.788 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:53.788 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.788 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:53.788 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:53.788 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:53.788 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:53.788 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.788 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.048 nvme0n1 00:28:54.048 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.048 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.048 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.048 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.048 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.048 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.048 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.048 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.048 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.048 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.048 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.048 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:54.048 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:54.048 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.048 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:54.048 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:54.048 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:54.048 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmUxYjMxMDE2YTIwM2JmNDJhNzk5N2RlNWUzNGVmZDA0YTdlOGMxYTM0MDM4ZmQ0MWU4ZDJmZTQxM2U2ZTZhN11PUVE=: 00:28:54.048 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:54.048 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:54.048 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:54.048 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YmUxYjMxMDE2YTIwM2JmNDJhNzk5N2RlNWUzNGVmZDA0YTdlOGMxYTM0MDM4ZmQ0MWU4ZDJmZTQxM2U2ZTZhN11PUVE=: 00:28:54.048 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:54.048 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:54.048 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.048 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:54.048 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:54.048 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:54.048 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:54.048 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:54.048 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.049 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.049 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.049 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:54.049 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:54.049 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:54.049 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:54.049 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.049 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.049 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:54.049 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.049 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:54.049 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:54.049 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:54.049 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:54.049 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.049 15:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.310 nvme0n1 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
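
Each pass of the inner loop provisions the target first: nvmet_auth_set_key takes (digest, dhgroup, keyid), pulls key/ckey out of the keys/ckeys arrays, and echoes 'hmac(<digest>)', the DH group name, and the DHHC-1 secrets, skipping the controller key when ckey is empty (as in the keyid=4 pass above). The trace does not show where those echoes are redirected; a minimal sketch of the helper's apparent shape, assuming they land in the kernel nvmet configfs host attributes (the path and attribute names are assumptions, not confirmed by this log):

    # Sketch only -- the configfs destination is assumed, not shown in the xtrace.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed path
        echo "hmac($digest)" > "$host/dhchap_hash"
        echo "$dhgroup" > "$host/dhchap_dhgroup"
        echo "$key" > "$host/dhchap_key"
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"  # only for bidirectional auth
    }
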
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU4OWZlMzAxYjZiZmY4NDhhMjk0YzQwZTMwMzA4YmLluIv1: 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU4OWZlMzAxYjZiZmY4NDhhMjk0YzQwZTMwMzA4YmLluIv1: 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: ]] 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.310 15:38:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.310 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.881 nvme0n1 00:28:54.881 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.881 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.881 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.881 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.881 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.881 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.881 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.881 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.881 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.881 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.881 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.881 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:54.881 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:54.881 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.881 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:54.881 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:54.881 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:54.881 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTMxZTk3ZTFkMjk4YWQ0YzI1OTQzY2M5YzYzYTkyNGRkZTE3YzJlZjc1OTdhMDlhWp7ZYw==: 00:28:54.881 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: 00:28:54.881 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:54.881 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:54.881 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTMxZTk3ZTFkMjk4YWQ0YzI1OTQzY2M5YzYzYTkyNGRkZTE3YzJlZjc1OTdhMDlhWp7ZYw==: 00:28:54.881 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: ]] 00:28:54.881 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: 00:28:54.882 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:28:54.882 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.882 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:54.882 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:54.882 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:54.882 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:54.882 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:54.882 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.882 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.882 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.882 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:54.882 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:54.882 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:54.882 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:54.882 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.882 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.882 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:54.882 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.882 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:54.882 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:54.882 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:54.882 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:54.882 15:38:43 
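
On the host side, connect_authenticate first pins negotiation to a single digest/DH-group pair, then attaches with the matching keys. Both RPCs below are copied from the trace for the ffdhe6144/keyid=1 pass; key1 and ckey1 are key names registered earlier in the run, outside this excerpt:

    # Restrict the initiator to exactly the combination under test (host/auth.sh@60).
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    # Bidirectional DH-HCHAP: key1 authenticates the host to the target,
    # ckey1 authenticates the controller back to the host (host/auth.sh@61).
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
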
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.882 15:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.455 nvme0n1 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDcxYjA4YzNjNzc1MmI0M2M0ZDBmNGJjNTVlYTkzZjD4HRHH: 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDcxYjA4YzNjNzc1MmI0M2M0ZDBmNGJjNTVlYTkzZjD4HRHH: 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: ]] 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.455 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.717 nvme0n1 00:28:55.717 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.717 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.717 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:55.717 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.717 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.717 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdjNmUwNTEyYzVkYzljOGUxMmE5ODYwYmU3OGVlNjk1ZjU1NGFiMTM3NTliMTE0d+RWHQ==: 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdjNmUwNTEyYzVkYzljOGUxMmE5ODYwYmU3OGVlNjk1ZjU1NGFiMTM3NTliMTE0d+RWHQ==: 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: ]] 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.978 15:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.240 nvme0n1 00:28:56.240 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.240 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:56.240 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:56.240 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.240 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.240 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.240 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.240 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:56.240 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.240 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.501 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.501 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:56.501 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:56.501 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:56.501 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:56.501 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:56.501 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:56.501 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmUxYjMxMDE2YTIwM2JmNDJhNzk5N2RlNWUzNGVmZDA0YTdlOGMxYTM0MDM4ZmQ0MWU4ZDJmZTQxM2U2ZTZhN11PUVE=: 00:28:56.501 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:56.501 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:56.501 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:56.501 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmUxYjMxMDE2YTIwM2JmNDJhNzk5N2RlNWUzNGVmZDA0YTdlOGMxYTM0MDM4ZmQ0MWU4ZDJmZTQxM2U2ZTZhN11PUVE=: 00:28:56.501 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:56.502 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:56.502 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:56.502 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:56.502 15:38:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:56.502 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:56.502 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:56.502 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:56.502 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.502 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.502 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.502 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:56.502 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:56.502 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:56.502 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:56.502 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:56.502 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:56.502 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:56.502 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:56.502 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:56.502 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:56.502 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:56.502 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:56.502 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.502 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.762 nvme0n1 00:28:56.762 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.762 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:56.762 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:56.762 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.762 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.762 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.762 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.762 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:56.762 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.762 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.762 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
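
Every iteration then ends with the same assert-and-teardown seen here: list controllers, require that the single returned name is nvme0 (the backslash-escaped right-hand side in [[ nvme0 == \n\v\m\e\0 ]] forces a literal comparison rather than pattern matching), and detach so the next keyid starts from a clean slate. Condensed, the step amounts to:

    # Authentication succeeded only if the attach produced controller "nvme0".
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0  # clean slate for the next key
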
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.762 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:56.762 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:56.762 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:56.762 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:56.762 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:56.762 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:56.762 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:56.762 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU4OWZlMzAxYjZiZmY4NDhhMjk0YzQwZTMwMzA4YmLluIv1: 00:28:56.762 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: 00:28:56.763 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:56.763 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:56.763 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU4OWZlMzAxYjZiZmY4NDhhMjk0YzQwZTMwMzA4YmLluIv1: 00:28:56.763 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: ]] 00:28:56.763 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjEyYmNhODk3MDIyMDVhOWExMjRmYTZjNDlkNTk1ODhhODdiZjM0MjJkODJjMzc1ODk3ODYxM2E5Mjk3YzM1ZGqQOg8=: 00:28:56.763 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:56.763 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:56.763 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:56.763 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:56.763 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:56.763 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:56.763 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:56.763 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.763 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.763 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.024 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:57.024 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:57.024 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:57.024 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:57.024 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.024 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.024 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:57.024 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.024 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:57.024 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:57.024 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:57.024 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:57.024 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.024 15:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.594 nvme0n1 00:28:57.594 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.594 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.594 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:57.594 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.594 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTMxZTk3ZTFkMjk4YWQ0YzI1OTQzY2M5YzYzYTkyNGRkZTE3YzJlZjc1OTdhMDlhWp7ZYw==: 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTMxZTk3ZTFkMjk4YWQ0YzI1OTQzY2M5YzYzYTkyNGRkZTE3YzJlZjc1OTdhMDlhWp7ZYw==: 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: ]] 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.595 15:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.166 nvme0n1 00:28:58.166 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.166 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.166 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:58.166 15:38:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.166 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.166 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDcxYjA4YzNjNzc1MmI0M2M0ZDBmNGJjNTVlYTkzZjD4HRHH: 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDcxYjA4YzNjNzc1MmI0M2M0ZDBmNGJjNTVlYTkzZjD4HRHH: 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: ]] 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.426 15:38:47 
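
The @101/@102 markers recurring through the trace are the two loops driving this sweep: with the digest fixed at sha512 for this stretch of the log, every DH group is crossed with every keyid (0-4). A reconstructed skeleton matching the script locations in the xtrace:

    for dhgroup in "${dhgroups[@]}"; do        # host/auth.sh@101: ffdhe4096, ffdhe6144, ffdhe8192
        for keyid in "${!keys[@]}"; do         # host/auth.sh@102: keyids 0..4
            nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # @103: target-side key
            connect_authenticate sha512 "$dhgroup" "$keyid"  # @104: host-side attach + checks
        done
    done
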
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.426 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.997 nvme0n1 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NzdjNmUwNTEyYzVkYzljOGUxMmE5ODYwYmU3OGVlNjk1ZjU1NGFiMTM3NTliMTE0d+RWHQ==: 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdjNmUwNTEyYzVkYzljOGUxMmE5ODYwYmU3OGVlNjk1ZjU1NGFiMTM3NTliMTE0d+RWHQ==: 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: ]] 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTk4N2IyNDI5MWRiYTAyZTBjZjk3Nzk4ZTM0ZGI1MzW8nQ+9: 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.997 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.998 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:58.998 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.998 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:58.998 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:58.998 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:58.998 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:58.998 15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.998 
15:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.941 nvme0n1 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmUxYjMxMDE2YTIwM2JmNDJhNzk5N2RlNWUzNGVmZDA0YTdlOGMxYTM0MDM4ZmQ0MWU4ZDJmZTQxM2U2ZTZhN11PUVE=: 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmUxYjMxMDE2YTIwM2JmNDJhNzk5N2RlNWUzNGVmZDA0YTdlOGMxYTM0MDM4ZmQ0MWU4ZDJmZTQxM2U2ZTZhN11PUVE=: 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.941 15:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.512 nvme0n1 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTMxZTk3ZTFkMjk4YWQ0YzI1OTQzY2M5YzYzYTkyNGRkZTE3YzJlZjc1OTdhMDlhWp7ZYw==: 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTMxZTk3ZTFkMjk4YWQ0YzI1OTQzY2M5YzYzYTkyNGRkZTE3YzJlZjc1OTdhMDlhWp7ZYw==: 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: ]] 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.512 request: 00:29:00.512 { 00:29:00.512 "name": "nvme0", 00:29:00.512 "trtype": "tcp", 00:29:00.512 "traddr": "10.0.0.1", 00:29:00.512 "adrfam": "ipv4", 00:29:00.512 "trsvcid": "4420", 00:29:00.512 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:00.512 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:00.512 "prchk_reftag": false, 00:29:00.512 "prchk_guard": false, 00:29:00.512 "hdgst": false, 00:29:00.512 "ddgst": false, 00:29:00.512 "allow_unrecognized_csi": false, 00:29:00.512 "method": "bdev_nvme_attach_controller", 00:29:00.512 "req_id": 1 00:29:00.512 } 00:29:00.512 Got JSON-RPC error response 00:29:00.512 response: 00:29:00.512 { 00:29:00.512 "code": -5, 00:29:00.512 "message": "Input/output error" 00:29:00.512 } 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
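The request/response pair above is the first negative case: with DH-HMAC-CHAP now required by the target, an attach that supplies no --dhchap-key is rejected, and the failed fabrics connect surfaces as JSON-RPC error -5 (Input/output error). The NOT wrapper from autotest_common.sh inverts the exit status, so the test passes only because the RPC failed. A sketch of the same pattern in plain bash (expect_failure is a hypothetical stand-in for NOT):

expect_failure() { ! "$@"; }   # succeed only when the wrapped command fails
expect_failure rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0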
00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.512 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.773 request: 00:29:00.773 { 00:29:00.773 "name": "nvme0", 00:29:00.773 "trtype": "tcp", 00:29:00.773 "traddr": "10.0.0.1", 00:29:00.773 "adrfam": "ipv4", 00:29:00.773 "trsvcid": "4420", 00:29:00.773 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:00.773 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:00.773 "prchk_reftag": false, 00:29:00.773 "prchk_guard": false, 00:29:00.773 "hdgst": false, 00:29:00.773 "ddgst": false, 00:29:00.774 "dhchap_key": "key2", 00:29:00.774 "allow_unrecognized_csi": false, 00:29:00.774 "method": "bdev_nvme_attach_controller", 00:29:00.774 "req_id": 1 00:29:00.774 } 00:29:00.774 Got JSON-RPC error response 00:29:00.774 response: 00:29:00.774 { 00:29:00.774 "code": -5, 00:29:00.774 "message": "Input/output error" 00:29:00.774 } 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
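The second negative case above is subtler: the host does present a key (key2), but it is not the key the target holds for this host, so authentication itself fails and the attach again returns -5; a third variant follows next (key1 paired with the mismatched controller key ckey2) and fails the same way. After each rejected attach the test asserts that no controller object was left behind; a sketch of that check, using the same rpc.py assumption:

(( $(rpc.py bdev_nvme_get_controllers | jq length) == 0 ))   # controller list must be empty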
00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.774 request: 00:29:00.774 { 00:29:00.774 "name": "nvme0", 00:29:00.774 "trtype": "tcp", 00:29:00.774 "traddr": "10.0.0.1", 00:29:00.774 "adrfam": "ipv4", 00:29:00.774 "trsvcid": "4420", 00:29:00.774 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:00.774 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:00.774 "prchk_reftag": false, 00:29:00.774 "prchk_guard": false, 00:29:00.774 "hdgst": false, 00:29:00.774 "ddgst": false, 00:29:00.774 "dhchap_key": "key1", 00:29:00.774 "dhchap_ctrlr_key": "ckey2", 00:29:00.774 "allow_unrecognized_csi": false, 00:29:00.774 "method": "bdev_nvme_attach_controller", 00:29:00.774 "req_id": 1 00:29:00.774 } 00:29:00.774 Got JSON-RPC error response 00:29:00.774 response: 00:29:00.774 { 00:29:00.774 "code": -5, 00:29:00.774 "message": "Input/output 
error" 00:29:00.774 } 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.774 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.036 nvme0n1 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDcxYjA4YzNjNzc1MmI0M2M0ZDBmNGJjNTVlYTkzZjD4HRHH: 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDcxYjA4YzNjNzc1MmI0M2M0ZDBmNGJjNTVlYTkzZjD4HRHH: 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: ]] 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.036 request: 00:29:01.036 { 00:29:01.036 "name": "nvme0", 00:29:01.036 "dhchap_key": "key1", 00:29:01.036 "dhchap_ctrlr_key": "ckey2", 00:29:01.036 "method": "bdev_nvme_set_keys", 00:29:01.036 "req_id": 1 00:29:01.036 } 00:29:01.036 Got JSON-RPC error response 00:29:01.036 response: 00:29:01.036 { 00:29:01.036 "code": -13, 00:29:01.036 "message": "Permission denied" 00:29:01.036 } 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.036 15:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.298 15:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.298 15:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:29:01.298 15:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:29:02.238 15:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:02.238 15:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:02.238 15:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.238 15:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.238 15:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.239 15:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:29:02.239 15:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:29:03.180 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:03.180 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:03.180 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.180 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.180 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.180 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:29:03.180 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:03.180 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:03.180 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:03.180 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:03.180 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:03.180 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTMxZTk3ZTFkMjk4YWQ0YzI1OTQzY2M5YzYzYTkyNGRkZTE3YzJlZjc1OTdhMDlhWp7ZYw==: 00:29:03.180 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: 00:29:03.180 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:03.180 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:03.180 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTMxZTk3ZTFkMjk4YWQ0YzI1OTQzY2M5YzYzYTkyNGRkZTE3YzJlZjc1OTdhMDlhWp7ZYw==: 00:29:03.180 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: ]] 00:29:03.181 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MTY0NDMzYmI0MTZlZGRlYTEwYjhhMGM4MDRiYzI3OTVkNDgwOThhYTY3ZGNjZDZiUq1dVw==: 00:29:03.181 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.441 nvme0n1 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDcxYjA4YzNjNzc1MmI0M2M0ZDBmNGJjNTVlYTkzZjD4HRHH: 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDcxYjA4YzNjNzc1MmI0M2M0ZDBmNGJjNTVlYTkzZjD4HRHH: 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: ]] 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGU2MDExZDRmZmEzMGIwYTRkZTEzMTE2YjVlOTEwZTD6/IIz: 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.441 request: 00:29:03.441 { 00:29:03.441 "name": "nvme0", 00:29:03.441 "dhchap_key": "key2", 00:29:03.441 "dhchap_ctrlr_key": "ckey1", 00:29:03.441 "method": "bdev_nvme_set_keys", 00:29:03.441 "req_id": 1 00:29:03.441 } 00:29:03.441 Got JSON-RPC error response 00:29:03.441 response: 00:29:03.441 { 00:29:03.441 "code": -13, 00:29:03.441 "message": "Permission denied" 00:29:03.441 } 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.441 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.701 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:29:03.701 15:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:29:04.642 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.642 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:29:04.642 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.642 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.642 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.642 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:29:04.642 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:29:04.642 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:29:04.642 15:38:53 
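Both re-key negative cases land as JSON-RPC error -13 (Permission denied) rather than -5: bdev_nvme_set_keys is refused when the proposed pair cannot re-authenticate the live session (key1 with ckey2 earlier, key2 with ckey1 here), while a matching pair (key2/ckey2 above) is accepted without dropping the connection. Around these attempts the test re-keys the kernel target and polls until the stale controller is torn down under the 1 s loss timeout; a sketch of that wait loop:

while (( $(rpc.py bdev_nvme_get_controllers | jq length) != 0 )); do
    sleep 1   # mirrors the "sleep 1s" iterations logged here
done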
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:29:04.642 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:04.642 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:29:04.642 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:04.642 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:29:04.642 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:04.642 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:04.642 rmmod nvme_tcp 00:29:04.642 rmmod nvme_fabrics 00:29:04.642 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:04.642 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:29:04.642 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:29:04.642 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 756952 ']' 00:29:04.642 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 756952 00:29:04.642 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 756952 ']' 00:29:04.642 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 756952 00:29:04.642 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:29:04.642 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:04.642 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 756952 00:29:04.642 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:04.642 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:04.642 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 756952' 00:29:04.642 killing process with pid 756952 00:29:04.642 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 756952 00:29:04.642 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 756952 00:29:04.902 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:04.902 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:04.902 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:04.902 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:29:04.902 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:29:04.902 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:29:04.902 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:04.902 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:04.902 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:04.902 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.902 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:29:04.902 15:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:06.818 15:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:06.818 15:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:06.818 15:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:06.818 15:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:29:06.818 15:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:29:06.818 15:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:29:07.079 15:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:07.079 15:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:07.079 15:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:07.079 15:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:07.079 15:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:29:07.079 15:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:29:07.079 15:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:10.389 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:10.655 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:10.655 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:10.655 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:10.655 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:10.655 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:10.655 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:10.655 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:10.655 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:10.655 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:10.655 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:10.655 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:10.655 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:10.655 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:10.655 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:10.655 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:10.655 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:29:11.226 15:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.y2k /tmp/spdk.key-null.U3X /tmp/spdk.key-sha256.9N5 /tmp/spdk.key-sha384.o37 /tmp/spdk.key-sha512.H3N /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:29:11.226 15:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:14.532 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:14.532 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:14.532 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
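Cleanup above runs in dependency order on the kernel nvmet side: the host is unlinked from the subsystem's allowed_hosts, the port-to-subsystem link and the namespace go next, then the now-empty configfs directories, and finally the nvmet_tcp/nvmet modules; the generated /tmp/spdk.key-* secrets are removed afterwards. Condensed from the host/auth.sh and nvmf/common.sh steps traced above:

rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
rmdir /sys/kernel/config/nvmet/ports/1
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
modprobe -r nvmet_tcp nvmet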
00:29:14.532 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:14.532 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:14.532 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:14.532 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:14.532 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:14.532 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:14.532 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:29:14.532 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:14.532 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:29:14.532 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:14.532 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:14.532 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:14.532 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:14.532 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:15.103 00:29:15.103 real 1m0.663s 00:29:15.103 user 0m54.496s 00:29:15.103 sys 0m16.078s 00:29:15.103 15:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:15.103 15:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.103 ************************************ 00:29:15.103 END TEST nvmf_auth_host 00:29:15.103 ************************************ 00:29:15.103 15:39:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:29:15.103 15:39:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:15.103 15:39:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:15.103 15:39:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:15.103 15:39:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.103 ************************************ 00:29:15.103 START TEST nvmf_digest 00:29:15.103 ************************************ 00:29:15.103 15:39:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:15.103 * Looking for test storage... 
00:29:15.103 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:15.103 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:15.103 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:29:15.103 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:15.364 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:15.364 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:15.364 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:15.364 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:15.364 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:29:15.364 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:29:15.364 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:29:15.364 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:29:15.364 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:29:15.364 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:29:15.364 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:29:15.364 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:15.364 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:29:15.364 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:29:15.364 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:15.364 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:15.364 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:29:15.364 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:29:15.364 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:15.364 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:29:15.364 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:29:15.364 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:29:15.364 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:29:15.364 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:15.364 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:29:15.364 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:29:15.364 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:15.364 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:15.364 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:29:15.364 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:15.364 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:15.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.364 --rc genhtml_branch_coverage=1 00:29:15.364 --rc genhtml_function_coverage=1 00:29:15.364 --rc genhtml_legend=1 00:29:15.364 --rc geninfo_all_blocks=1 00:29:15.364 --rc geninfo_unexecuted_blocks=1 00:29:15.364 00:29:15.364 ' 00:29:15.364 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:15.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.364 --rc genhtml_branch_coverage=1 00:29:15.364 --rc genhtml_function_coverage=1 00:29:15.364 --rc genhtml_legend=1 00:29:15.364 --rc geninfo_all_blocks=1 00:29:15.364 --rc geninfo_unexecuted_blocks=1 00:29:15.364 00:29:15.364 ' 00:29:15.364 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:15.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.364 --rc genhtml_branch_coverage=1 00:29:15.364 --rc genhtml_function_coverage=1 00:29:15.364 --rc genhtml_legend=1 00:29:15.364 --rc geninfo_all_blocks=1 00:29:15.364 --rc geninfo_unexecuted_blocks=1 00:29:15.364 00:29:15.364 ' 00:29:15.364 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:15.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.364 --rc genhtml_branch_coverage=1 00:29:15.364 --rc genhtml_function_coverage=1 00:29:15.364 --rc genhtml_legend=1 00:29:15.364 --rc geninfo_all_blocks=1 00:29:15.365 --rc geninfo_unexecuted_blocks=1 00:29:15.365 00:29:15.365 ' 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:15.365 
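The scripts/common.sh trace above is a version gate for the digest test's coverage options: cmp_versions splits each version string on ".", "-" and ":" (the IFS=.-: reads), then compares numeric fields left to right, so lcov 1.15 sorts below 2 and the branch/function coverage flags get exported. A compact bash sketch of the same comparison (not the verbatim helper):

version_lt() {                         # usage: version_lt 1.15 2
    local IFS=.-: a b i
    read -ra a <<< "$1"; read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # strictly lower field: older
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # strictly higher field: newer
    done
    return 1                           # equal versions are not "less than"
}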
15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:15.365 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:15.365 15:39:04 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:29:15.365 15:39:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:23.509 
15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:23.509 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:23.509 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:23.509 Found net devices under 0000:4b:00.0: cvl_0_0 
00:29:23.509 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:23.510 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:23.510 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:23.510 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:29:23.510 00:29:23.510 --- 10.0.0.2 ping statistics --- 00:29:23.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.510 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:23.510 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:23.510 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:29:23.510 00:29:23.510 --- 10.0.0.1 ping statistics --- 00:29:23.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.510 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:23.510 ************************************ 00:29:23.510 START TEST nvmf_digest_clean 00:29:23.510 ************************************ 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=774582 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 774582 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 774582 ']' 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:23.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:23.510 15:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:23.510 [2024-11-20 15:39:11.853450] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:29:23.510 [2024-11-20 15:39:11.853512] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:23.510 [2024-11-20 15:39:11.953877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:23.511 [2024-11-20 15:39:12.004553] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:23.511 [2024-11-20 15:39:12.004606] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:23.511 [2024-11-20 15:39:12.004615] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:23.511 [2024-11-20 15:39:12.004622] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:23.511 [2024-11-20 15:39:12.004629] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
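At this point the fixture for the clean-digest suite is fully up: one physical port (cvl_0_0) has been moved into a network namespace to act as the target, its peer port (cvl_0_1) stays in the root namespace as the initiator, and the target app has been launched inside the namespace with --wait-for-rpc. Stripped of tracing, the bring-up traced above reduces to this sequence (interface names, addresses, and flags are the ones this run used; paths shortened to be relative to the SPDK tree):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                   # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator sanity check
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &

The trailing nvmf_tgt line is a sketch of what nvmfappstart does; the harness additionally records the pid (774582 in this run) and blocks on the app's RPC socket before issuing any commands.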
00:29:23.511 [2024-11-20 15:39:12.005444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:23.771 15:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:23.771 15:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:23.771 15:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:23.771 15:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:23.771 15:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:23.771 15:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:23.771 15:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:29:23.771 15:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:29:23.771 15:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:29:23.771 15:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.771 15:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:24.031 null0 00:29:24.031 [2024-11-20 15:39:12.820119] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:24.031 [2024-11-20 15:39:12.844448] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:24.031 15:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.031 15:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:29:24.031 15:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:24.031 15:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:24.031 15:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:24.031 15:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:24.031 15:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:24.031 15:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:24.031 15:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=774697 00:29:24.031 15:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 774697 /var/tmp/bperf.sock 00:29:24.031 15:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 774697 ']' 00:29:24.031 15:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:24.031 15:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:24.031 15:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:29:24.031 15:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:24.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:24.031 15:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:24.031 15:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:24.031 [2024-11-20 15:39:12.904310] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:29:24.031 [2024-11-20 15:39:12.904377] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid774697 ] 00:29:24.292 [2024-11-20 15:39:12.998805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.292 [2024-11-20 15:39:13.051665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.865 15:39:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:24.865 15:39:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:24.865 15:39:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:24.865 15:39:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:24.865 15:39:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:25.125 15:39:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:25.125 15:39:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:25.694 nvme0n1 00:29:25.694 15:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:25.694 15:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:25.694 Running I/O for 2 seconds... 
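While the first run is in flight, it is worth condensing what the trace above amounts to. Each run_bperf invocation starts bdevperf suspended (-z together with --wait-for-rpc), initializes it over its own RPC socket, attaches an NVMe-oF controller with TCP data digest enabled (--ddgst is the option this suite exercises), and then drives the workload through the helper script. With every flag copied from this randread / 4 KiB / qd 128 run and paths relative to the SPDK tree:

  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests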
00:29:27.573 19405.00 IOPS, 75.80 MiB/s [2024-11-20T14:39:16.533Z] 20731.00 IOPS, 80.98 MiB/s 00:29:27.573 Latency(us) 00:29:27.573 [2024-11-20T14:39:16.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:27.573 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:27.573 nvme0n1 : 2.00 20760.92 81.10 0.00 0.00 6159.18 2157.23 23156.05 00:29:27.573 [2024-11-20T14:39:16.533Z] =================================================================================================================== 00:29:27.573 [2024-11-20T14:39:16.533Z] Total : 20760.92 81.10 0.00 0.00 6159.18 2157.23 23156.05 00:29:27.573 { 00:29:27.573 "results": [ 00:29:27.573 { 00:29:27.573 "job": "nvme0n1", 00:29:27.573 "core_mask": "0x2", 00:29:27.573 "workload": "randread", 00:29:27.574 "status": "finished", 00:29:27.574 "queue_depth": 128, 00:29:27.574 "io_size": 4096, 00:29:27.574 "runtime": 2.003283, 00:29:27.574 "iops": 20760.920948263425, 00:29:27.574 "mibps": 81.097347454154, 00:29:27.574 "io_failed": 0, 00:29:27.574 "io_timeout": 0, 00:29:27.574 "avg_latency_us": 6159.1752581550045, 00:29:27.574 "min_latency_us": 2157.2266666666665, 00:29:27.574 "max_latency_us": 23156.053333333333 00:29:27.574 } 00:29:27.574 ], 00:29:27.574 "core_count": 1 00:29:27.574 } 00:29:27.834 15:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:27.834 15:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:27.834 15:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:27.834 15:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:27.834 15:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:27.834 | select(.opcode=="crc32c") 00:29:27.834 | "\(.module_name) \(.executed)"' 00:29:27.834 15:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:27.834 15:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:27.834 15:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:27.834 15:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:27.834 15:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 774697 00:29:27.834 15:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 774697 ']' 00:29:27.834 15:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 774697 00:29:27.834 15:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:27.834 15:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:27.834 15:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 774697 00:29:27.834 15:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:27.834 15:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:29:27.834 15:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 774697' 00:29:27.834 killing process with pid 774697 00:29:27.834 15:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 774697 00:29:27.834 Received shutdown signal, test time was about 2.000000 seconds 00:29:27.834 00:29:27.834 Latency(us) 00:29:27.834 [2024-11-20T14:39:16.794Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:27.834 [2024-11-20T14:39:16.794Z] =================================================================================================================== 00:29:27.834 [2024-11-20T14:39:16.794Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:27.834 15:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 774697 00:29:28.094 15:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:29:28.094 15:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:28.094 15:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:28.094 15:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:28.094 15:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:28.094 15:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:28.094 15:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:28.094 15:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=775475 00:29:28.094 15:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 775475 /var/tmp/bperf.sock 00:29:28.094 15:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 775475 ']' 00:29:28.094 15:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:28.094 15:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:28.094 15:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:28.094 15:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:28.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:28.094 15:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:28.094 15:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:28.094 [2024-11-20 15:39:16.936167] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
00:29:28.094 [2024-11-20 15:39:16.936227] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid775475 ] 00:29:28.094 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:28.094 Zero copy mechanism will not be used. 00:29:28.094 [2024-11-20 15:39:17.017583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:28.094 [2024-11-20 15:39:17.047283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:29.035 15:39:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:29.035 15:39:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:29.035 15:39:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:29.035 15:39:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:29.035 15:39:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:29.035 15:39:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:29.035 15:39:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:29.606 nvme0n1 00:29:29.606 15:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:29.606 15:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:29.606 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:29.606 Zero copy mechanism will not be used. 00:29:29.606 Running I/O for 2 seconds... 
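Each completed run is validated the same way: the harness reads back the accel framework's statistics and checks that crc32c operations were actually executed, and by the expected module (plain software here, since DSA is disabled in this configuration). The get_accel_stats check traced after the first run reduces to:

  ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'

A non-zero executed count paired with module_name "software" is what lets run_bperf pass; the same probe repeats after every run below.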
00:29:31.936 3294.00 IOPS, 411.75 MiB/s [2024-11-20T14:39:20.896Z] 3613.50 IOPS, 451.69 MiB/s 00:29:31.936 Latency(us) 00:29:31.936 [2024-11-20T14:39:20.896Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:31.936 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:31.936 nvme0n1 : 2.00 3618.14 452.27 0.00 0.00 4419.00 549.55 14199.47 00:29:31.936 [2024-11-20T14:39:20.897Z] =================================================================================================================== 00:29:31.937 [2024-11-20T14:39:20.897Z] Total : 3618.14 452.27 0.00 0.00 4419.00 549.55 14199.47 00:29:31.937 { 00:29:31.937 "results": [ 00:29:31.937 { 00:29:31.937 "job": "nvme0n1", 00:29:31.937 "core_mask": "0x2", 00:29:31.937 "workload": "randread", 00:29:31.937 "status": "finished", 00:29:31.937 "queue_depth": 16, 00:29:31.937 "io_size": 131072, 00:29:31.937 "runtime": 2.001858, 00:29:31.937 "iops": 3618.138749102084, 00:29:31.937 "mibps": 452.2673436377605, 00:29:31.937 "io_failed": 0, 00:29:31.937 "io_timeout": 0, 00:29:31.937 "avg_latency_us": 4418.998035804685, 00:29:31.937 "min_latency_us": 549.5466666666666, 00:29:31.937 "max_latency_us": 14199.466666666667 00:29:31.937 } 00:29:31.937 ], 00:29:31.937 "core_count": 1 00:29:31.937 } 00:29:31.937 15:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:31.937 15:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:31.937 15:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:31.937 15:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:31.937 | select(.opcode=="crc32c") 00:29:31.937 | "\(.module_name) \(.executed)"' 00:29:31.937 15:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:31.937 15:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:31.937 15:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:31.937 15:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:31.937 15:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:31.937 15:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 775475 00:29:31.937 15:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 775475 ']' 00:29:31.937 15:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 775475 00:29:31.937 15:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:31.937 15:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:31.937 15:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 775475 00:29:31.937 15:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:31.937 15:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:29:31.937 15:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 775475' 00:29:31.937 killing process with pid 775475 00:29:31.937 15:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 775475 00:29:31.937 Received shutdown signal, test time was about 2.000000 seconds 00:29:31.937 00:29:31.937 Latency(us) 00:29:31.937 [2024-11-20T14:39:20.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:31.937 [2024-11-20T14:39:20.897Z] =================================================================================================================== 00:29:31.937 [2024-11-20T14:39:20.897Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:31.937 15:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 775475 00:29:31.937 15:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:29:31.937 15:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:31.937 15:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:31.937 15:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:31.937 15:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:31.937 15:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:31.937 15:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:31.937 15:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=776345 00:29:31.937 15:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 776345 /var/tmp/bperf.sock 00:29:31.937 15:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 776345 ']' 00:29:31.937 15:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:31.937 15:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:31.937 15:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:31.937 15:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:31.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:31.937 15:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:31.937 15:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:31.937 [2024-11-20 15:39:20.877629] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
00:29:31.937 [2024-11-20 15:39:20.877691] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid776345 ] 00:29:32.198 [2024-11-20 15:39:20.940773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:32.198 [2024-11-20 15:39:20.969987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:32.198 15:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:32.198 15:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:32.198 15:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:32.198 15:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:32.198 15:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:32.464 15:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:32.464 15:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:32.758 nvme0n1 00:29:32.758 15:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:32.758 15:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:33.068 Running I/O for 2 seconds... 
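Between runs, the bdevperf instance is torn down with the killprocess helper whose individual checks are visible in the traces above (the '[' -z pid ']' argument test, kill -0, the ps comm lookup, and the reactor_1-versus-sudo comparison). A distilled form of that logic, with the guard rationale inferred from the checks themselves:

  kill -0 "$bperfpid"                               # still alive?
  pname=$(ps --no-headers -o comm= "$bperfpid")     # reports reactor_1 for a live bdevperf
  [ "$pname" = sudo ] || kill "$bperfpid"           # refuse to signal a recycled sudo pid
  wait "$bperfpid"                                  # reap; bdevperf prints its shutdown summary

The all-zero Latency table that follows each "killing process" message is that shutdown summary, not a failed measurement.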
00:29:34.970 30280.00 IOPS, 118.28 MiB/s [2024-11-20T14:39:23.930Z] 30458.50 IOPS, 118.98 MiB/s 00:29:34.970 Latency(us) 00:29:34.970 [2024-11-20T14:39:23.930Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:34.970 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:34.970 nvme0n1 : 2.00 30461.18 118.99 0.00 0.00 4196.89 2184.53 16056.32 00:29:34.970 [2024-11-20T14:39:23.930Z] =================================================================================================================== 00:29:34.970 [2024-11-20T14:39:23.930Z] Total : 30461.18 118.99 0.00 0.00 4196.89 2184.53 16056.32 00:29:34.970 { 00:29:34.970 "results": [ 00:29:34.970 { 00:29:34.970 "job": "nvme0n1", 00:29:34.970 "core_mask": "0x2", 00:29:34.970 "workload": "randwrite", 00:29:34.970 "status": "finished", 00:29:34.970 "queue_depth": 128, 00:29:34.970 "io_size": 4096, 00:29:34.970 "runtime": 2.004026, 00:29:34.970 "iops": 30461.18164135595, 00:29:34.970 "mibps": 118.98899078654668, 00:29:34.970 "io_failed": 0, 00:29:34.970 "io_timeout": 0, 00:29:34.970 "avg_latency_us": 4196.892754306932, 00:29:34.970 "min_latency_us": 2184.5333333333333, 00:29:34.970 "max_latency_us": 16056.32 00:29:34.970 } 00:29:34.970 ], 00:29:34.970 "core_count": 1 00:29:34.970 } 00:29:34.970 15:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:34.970 15:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:34.970 15:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:34.970 15:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:34.970 | select(.opcode=="crc32c") 00:29:34.970 | "\(.module_name) \(.executed)"' 00:29:34.970 15:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:35.232 15:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:35.232 15:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:35.232 15:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:35.232 15:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:35.232 15:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 776345 00:29:35.232 15:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 776345 ']' 00:29:35.232 15:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 776345 00:29:35.232 15:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:35.232 15:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:35.232 15:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 776345 00:29:35.232 15:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:35.232 15:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:29:35.232 15:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 776345' 00:29:35.232 killing process with pid 776345 00:29:35.232 15:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 776345 00:29:35.232 Received shutdown signal, test time was about 2.000000 seconds 00:29:35.232 00:29:35.232 Latency(us) 00:29:35.232 [2024-11-20T14:39:24.192Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:35.232 [2024-11-20T14:39:24.192Z] =================================================================================================================== 00:29:35.232 [2024-11-20T14:39:24.192Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:35.232 15:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 776345 00:29:35.232 15:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:29:35.232 15:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:35.232 15:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:35.232 15:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:35.232 15:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:35.233 15:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:35.233 15:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:35.233 15:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=776920 00:29:35.233 15:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 776920 /var/tmp/bperf.sock 00:29:35.233 15:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 776920 ']' 00:29:35.233 15:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:35.233 15:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:35.233 15:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:35.233 15:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:35.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:35.233 15:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:35.233 15:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:35.233 [2024-11-20 15:39:24.181405] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
00:29:35.233 [2024-11-20 15:39:24.181462] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid776920 ] 00:29:35.233 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:35.233 Zero copy mechanism will not be used. 00:29:35.514 [2024-11-20 15:39:24.263551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.514 [2024-11-20 15:39:24.293138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:36.084 15:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:36.084 15:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:36.084 15:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:36.084 15:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:36.084 15:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:36.345 15:39:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:36.345 15:39:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:36.604 nvme0n1 00:29:36.604 15:39:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:36.604 15:39:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:36.604 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:36.604 Zero copy mechanism will not be used. 00:29:36.604 Running I/O for 2 seconds... 
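This fourth bdevperf instance completes the matrix the clean suite walks: both I/O directions at a small block size with a deep queue and at a large block size with a shallow queue. The four calls traced from host/digest.sh are:

  run_bperf randread  4096   128 false    # 4 KiB reads,   qd 128
  run_bperf randread  131072 16  false    # 128 KiB reads,  qd 16 (above the 65536-byte zero-copy threshold)
  run_bperf randwrite 4096   128 false    # 4 KiB writes,  qd 128
  run_bperf randwrite 131072 16  false    # 128 KiB writes, qd 16

The final false argument is scan_dsa; with it unset, every digest in this job is computed by the software crc32c module.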
00:29:38.930 3171.00 IOPS, 396.38 MiB/s [2024-11-20T14:39:27.890Z] 3876.50 IOPS, 484.56 MiB/s
00:29:38.930 Latency(us)
00:29:38.930 [2024-11-20T14:39:27.890Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:38.930 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:29:38.930 nvme0n1 : 2.00 3879.22 484.90 0.00 0.00 4120.23 1570.13 8519.68
00:29:38.930 [2024-11-20T14:39:27.890Z] ===================================================================================================================
00:29:38.930 [2024-11-20T14:39:27.890Z] Total : 3879.22 484.90 0.00 0.00 4120.23 1570.13 8519.68
00:29:38.930 {
00:29:38.930   "results": [
00:29:38.930     {
00:29:38.930       "job": "nvme0n1",
00:29:38.930       "core_mask": "0x2",
00:29:38.930       "workload": "randwrite",
00:29:38.930       "status": "finished",
00:29:38.930       "queue_depth": 16,
00:29:38.930       "io_size": 131072,
00:29:38.930       "runtime": 2.003495,
00:29:38.930       "iops": 3879.2210611955607,
00:29:38.930       "mibps": 484.9026326494451,
00:29:38.930       "io_failed": 0,
00:29:38.930       "io_timeout": 0,
00:29:38.930       "avg_latency_us": 4120.228869445874,
00:29:38.930       "min_latency_us": 1570.1333333333334,
00:29:38.930       "max_latency_us": 8519.68
00:29:38.930     }
00:29:38.930   ],
00:29:38.930   "core_count": 1
00:29:38.930 }
00:29:38.930 15:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:29:38.930 15:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:29:38.930 15:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:29:38.930 15:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:29:38.930 | select(.opcode=="crc32c")
00:29:38.930 | "\(.module_name) \(.executed)"'
00:29:38.930 15:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:29:38.930 15:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:29:38.930 15:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:29:38.930 15:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:29:38.930 15:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:29:38.930 15:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 776920
00:29:38.930 15:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 776920 ']'
00:29:38.930 15:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 776920
00:29:38.930 15:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:29:38.930 15:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:38.930 15:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 776920
00:29:38.930 15:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:38.930 15:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:38.931 15:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 776920'
00:29:38.931 killing process with pid 776920
00:29:38.931 15:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 776920
00:29:38.931 Received shutdown signal, test time was about 2.000000 seconds
00:29:38.931
00:29:38.931 Latency(us)
00:29:38.931 [2024-11-20T14:39:27.891Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:38.931 [2024-11-20T14:39:27.891Z] ===================================================================================================================
00:29:38.931 [2024-11-20T14:39:27.891Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:38.931 15:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 776920
00:29:39.191 15:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 774582
00:29:39.191 15:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 774582 ']'
00:29:39.191 15:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 774582
00:29:39.191 15:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:29:39.191 15:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:39.191 15:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 774582
00:29:39.191 15:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:29:39.191 15:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:29:39.191 15:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 774582'
00:29:39.191 killing process with pid 774582
00:29:39.191 15:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 774582
00:29:39.191 15:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 774582
00:29:39.191
00:29:39.191 real 0m16.336s
00:29:39.191 user 0m32.370s
00:29:39.191 sys 0m3.558s
00:29:39.191 15:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:39.191 15:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:29:39.191 ************************************
00:29:39.191 END TEST nvmf_digest_clean
00:29:39.191 ************************************
00:29:39.452 15:39:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:29:39.452 15:39:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:29:39.452 15:39:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:39.452 15:39:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:29:39.452 ************************************
00:29:39.452 START TEST nvmf_digest_error
00:29:39.452 ************************************
00:29:39.452 15:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error
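Before the nvmf_digest_error output begins, a note on the digest_clean verification that just finished above: the script asks the bdevperf app for accel-framework statistics and confirms that crc32c (the NVMe/TCP digest opcode) actually executed, and in the expected module. A minimal standalone sketch of that flow, assuming the same socket and rpc.py paths as this run (the real get_accel_stats helper lives in the test's digest.sh):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Pull per-opcode accel stats from the bdevperf app and keep only the crc32c row.
    read -r acc_module acc_executed < <(
      "$rpc" -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 ))           # digests were computed at all
    [[ $acc_module == software ]]    # and by the expected module (software in this run)

Once both checks pass, the killprocess/wait sequence above tears down bdevperf (pid 776920) and then the nvmf target (pid 774582).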
00:29:39.452 15:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:29:39.452 15:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:39.452 15:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:39.452 15:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:39.452 15:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=777778
00:29:39.452 15:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 777778
00:29:39.452 15:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:29:39.452 15:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 777778 ']'
00:29:39.452 15:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:39.452 15:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:39.452 15:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:39.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:39.452 15:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:39.452 15:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:39.452 [2024-11-20 15:39:28.267225] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization...
00:29:39.452 [2024-11-20 15:39:28.267283] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:39.452 [2024-11-20 15:39:28.360551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:39.452 [2024-11-20 15:39:28.392904] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:39.452 [2024-11-20 15:39:28.392936] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:39.452 [2024-11-20 15:39:28.392941] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:39.452 [2024-11-20 15:39:28.392947] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:39.452 [2024-11-20 15:39:28.392951] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
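Note the --wait-for-rpc flag on the nvmf_tgt command line above: the target comes up, opens /var/tmp/spdk.sock, and defers subsystem initialization until an explicit RPC, which is what lets the next step route the crc32c opcode into the error-injection module before any digest is ever computed. A hedged sketch of this start-and-wait dance; the polling loop merely stands in for autotest_common.sh's waitforlisten, whose exact implementation may differ, and max_retries=100 is taken from the trace:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    for ((i = 100; i > 0; i--)); do                # poll until the RPC socket answers
      "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.5
    done
    # The app is now listening but still paused; accel_assign_opc must be issued
    # before framework_start_init for the assignment to take effect.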
00:29:39.453 [2024-11-20 15:39:28.393440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:40.392 15:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:40.392 15:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:29:40.392 15:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:29:40.392 15:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable
00:29:40.392 15:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:40.392 15:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:40.392 15:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:29:40.392 15:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:40.392 15:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:40.392 [2024-11-20 15:39:29.119437] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:29:40.392 15:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:40.392 15:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:29:40.392 15:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:29:40.392 15:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:40.392 15:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:40.392 null0
00:29:40.392 [2024-11-20 15:39:29.197185] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:40.392 [2024-11-20 15:39:29.221393] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:40.392 15:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:40.392 15:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:29:40.392 15:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:40.392 15:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:29:40.392 15:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:29:40.392 15:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:29:40.392 15:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=777881
00:29:40.392 15:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 777881 /var/tmp/bperf.sock
00:29:40.392 15:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 777881 ']'
00:29:40.392 15:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
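run_bperf_err launches bdevperf with its own RPC socket (-r /var/tmp/bperf.sock) and -z, so no I/O runs until perform_tests is issued later; the trace that follows then wires up the initiator and arms the fault. The same sequence condensed into one hedged sketch; every RPC below appears verbatim in the surrounding trace, and rpc_cmd without -s appears to go to the target's default /var/tmp/spdk.sock in this run:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Target side: crc32c is already routed to the error-injection accel module;
    # make sure no fault is armed while the controller connects.
    "$rpc" accel_error_inject_error -o crc32c -t disable
    # Initiator (bdevperf) side: retry failed I/O forever and enable NVMe/TCP data digest.
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Arm the fault: corrupt the next 256 crc32c results the target produces.
    "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 256

With --ddgst negotiated, the target computes a data digest for every C2H payload through its now-corrupted crc32c path, the initiator recomputes the digest on receive, and every mismatch surfaces as one of the "data digest error on tqpair" records that dominate the rest of this output.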
00:29:40.392 15:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:40.392 15:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:40.392 15:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:40.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:40.392 15:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:40.392 15:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:40.392 [2024-11-20 15:39:29.288473] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization...
00:29:40.392 [2024-11-20 15:39:29.288539] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid777881 ]
00:29:40.652 [2024-11-20 15:39:29.372443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:40.652 [2024-11-20 15:39:29.402371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:41.222 15:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:41.222 15:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:29:41.222 15:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:41.222 15:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:41.483 15:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:41.483 15:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:41.483 15:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:41.483 15:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:41.483 15:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:41.483 15:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:41.743 nvme0n1
00:29:41.743 15:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:41.743 15:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:41.743 15:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:41.743
15:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.743 15:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:41.743 15:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:42.004 Running I/O for 2 seconds... 00:29:42.004 [2024-11-20 15:39:30.765972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.004 [2024-11-20 15:39:30.766004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:25466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.004 [2024-11-20 15:39:30.766014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.004 [2024-11-20 15:39:30.776406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.004 [2024-11-20 15:39:30.776427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.004 [2024-11-20 15:39:30.776434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.004 [2024-11-20 15:39:30.786495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.004 [2024-11-20 15:39:30.786513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.004 [2024-11-20 15:39:30.786520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.004 [2024-11-20 15:39:30.795621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.004 [2024-11-20 15:39:30.795639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.004 [2024-11-20 15:39:30.795645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.004 [2024-11-20 15:39:30.804250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.004 [2024-11-20 15:39:30.804267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.004 [2024-11-20 15:39:30.804275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.004 [2024-11-20 15:39:30.813221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.004 [2024-11-20 15:39:30.813238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.004 [2024-11-20 15:39:30.813244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:42.004 [2024-11-20 15:39:30.822444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.004 [2024-11-20 15:39:30.822462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.004 [2024-11-20 15:39:30.822468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.004 [2024-11-20 15:39:30.831332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.004 [2024-11-20 15:39:30.831349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.004 [2024-11-20 15:39:30.831357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.004 [2024-11-20 15:39:30.839853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.004 [2024-11-20 15:39:30.839871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.004 [2024-11-20 15:39:30.839878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.004 [2024-11-20 15:39:30.848962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.004 [2024-11-20 15:39:30.848978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.004 [2024-11-20 15:39:30.848985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.004 [2024-11-20 15:39:30.858791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.004 [2024-11-20 15:39:30.858808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.004 [2024-11-20 15:39:30.858814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.004 [2024-11-20 15:39:30.867666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.004 [2024-11-20 15:39:30.867683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.004 [2024-11-20 15:39:30.867689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.004 [2024-11-20 15:39:30.876942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.004 [2024-11-20 15:39:30.876959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.004 [2024-11-20 15:39:30.876966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.004 [2024-11-20 15:39:30.886971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.004 [2024-11-20 15:39:30.886988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.004 [2024-11-20 15:39:30.886998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.004 [2024-11-20 15:39:30.896761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.004 [2024-11-20 15:39:30.896779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.004 [2024-11-20 15:39:30.896785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.004 [2024-11-20 15:39:30.904553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.004 [2024-11-20 15:39:30.904570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.004 [2024-11-20 15:39:30.904577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.004 [2024-11-20 15:39:30.915620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.004 [2024-11-20 15:39:30.915637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.004 [2024-11-20 15:39:30.915643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.004 [2024-11-20 15:39:30.925827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.004 [2024-11-20 15:39:30.925845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.004 [2024-11-20 15:39:30.925851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.004 [2024-11-20 15:39:30.933793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.004 [2024-11-20 15:39:30.933811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.004 [2024-11-20 15:39:30.933818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.004 [2024-11-20 15:39:30.943459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.004 [2024-11-20 15:39:30.943477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.004 [2024-11-20 15:39:30.943483] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.004 [2024-11-20 15:39:30.953637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.004 [2024-11-20 15:39:30.953654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.004 [2024-11-20 15:39:30.953660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.004 [2024-11-20 15:39:30.961880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.004 [2024-11-20 15:39:30.961897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.004 [2024-11-20 15:39:30.961903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.266 [2024-11-20 15:39:30.970507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.266 [2024-11-20 15:39:30.970529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.266 [2024-11-20 15:39:30.970535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.266 [2024-11-20 15:39:30.980147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.266 [2024-11-20 15:39:30.980169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.266 [2024-11-20 15:39:30.980175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.266 [2024-11-20 15:39:30.988338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.266 [2024-11-20 15:39:30.988355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.266 [2024-11-20 15:39:30.988362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.266 [2024-11-20 15:39:30.997308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.266 [2024-11-20 15:39:30.997325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.266 [2024-11-20 15:39:30.997331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.266 [2024-11-20 15:39:31.005502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.266 [2024-11-20 15:39:31.005520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.266 
[2024-11-20 15:39:31.005526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.266 [2024-11-20 15:39:31.014849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.266 [2024-11-20 15:39:31.014866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.266 [2024-11-20 15:39:31.014873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.266 [2024-11-20 15:39:31.024191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.266 [2024-11-20 15:39:31.024208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.266 [2024-11-20 15:39:31.024215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.266 [2024-11-20 15:39:31.033617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.266 [2024-11-20 15:39:31.033635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.266 [2024-11-20 15:39:31.033642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.266 [2024-11-20 15:39:31.041850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.266 [2024-11-20 15:39:31.041867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.266 [2024-11-20 15:39:31.041874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.266 [2024-11-20 15:39:31.050401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.266 [2024-11-20 15:39:31.050418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.266 [2024-11-20 15:39:31.050425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.266 [2024-11-20 15:39:31.060672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.266 [2024-11-20 15:39:31.060689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.266 [2024-11-20 15:39:31.060696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.266 [2024-11-20 15:39:31.069461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.266 [2024-11-20 15:39:31.069478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13379 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.266 [2024-11-20 15:39:31.069485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.266 [2024-11-20 15:39:31.079530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.266 [2024-11-20 15:39:31.079547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.266 [2024-11-20 15:39:31.079553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.266 [2024-11-20 15:39:31.086884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.266 [2024-11-20 15:39:31.086901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.266 [2024-11-20 15:39:31.086908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.266 [2024-11-20 15:39:31.097990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.266 [2024-11-20 15:39:31.098008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.266 [2024-11-20 15:39:31.098014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.266 [2024-11-20 15:39:31.107221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.266 [2024-11-20 15:39:31.107238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.266 [2024-11-20 15:39:31.107244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.266 [2024-11-20 15:39:31.115874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.266 [2024-11-20 15:39:31.115891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.266 [2024-11-20 15:39:31.115897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.266 [2024-11-20 15:39:31.124390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.266 [2024-11-20 15:39:31.124407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.266 [2024-11-20 15:39:31.124418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.266 [2024-11-20 15:39:31.133482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.267 [2024-11-20 15:39:31.133500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:3146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.267 [2024-11-20 15:39:31.133507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.267 [2024-11-20 15:39:31.142713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.267 [2024-11-20 15:39:31.142730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.267 [2024-11-20 15:39:31.142736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.267 [2024-11-20 15:39:31.150460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.267 [2024-11-20 15:39:31.150477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.267 [2024-11-20 15:39:31.150484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.267 [2024-11-20 15:39:31.161472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.267 [2024-11-20 15:39:31.161489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.267 [2024-11-20 15:39:31.161496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.267 [2024-11-20 15:39:31.170018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.267 [2024-11-20 15:39:31.170035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.267 [2024-11-20 15:39:31.170042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.267 [2024-11-20 15:39:31.179543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.267 [2024-11-20 15:39:31.179560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.267 [2024-11-20 15:39:31.179566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.267 [2024-11-20 15:39:31.186883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.267 [2024-11-20 15:39:31.186900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.267 [2024-11-20 15:39:31.186906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.267 [2024-11-20 15:39:31.198315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.267 [2024-11-20 15:39:31.198333] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.267 [2024-11-20 15:39:31.198340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.267 [2024-11-20 15:39:31.207538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.267 [2024-11-20 15:39:31.207558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.267 [2024-11-20 15:39:31.207565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.267 [2024-11-20 15:39:31.217453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.267 [2024-11-20 15:39:31.217470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.267 [2024-11-20 15:39:31.217476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.528 [2024-11-20 15:39:31.224981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.529 [2024-11-20 15:39:31.224998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.529 [2024-11-20 15:39:31.225005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.529 [2024-11-20 15:39:31.235031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.529 [2024-11-20 15:39:31.235047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.529 [2024-11-20 15:39:31.235054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.529 [2024-11-20 15:39:31.243818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.529 [2024-11-20 15:39:31.243835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.529 [2024-11-20 15:39:31.243842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.529 [2024-11-20 15:39:31.252312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.529 [2024-11-20 15:39:31.252329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.529 [2024-11-20 15:39:31.252336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.529 [2024-11-20 15:39:31.261538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 
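Each failure in this flood follows the same three-line pattern: nvme_tcp.c:1365 flags the digest mismatch on the qpair, nvme_qpair.c:243 prints the READ command whose payload digest did not check out, and nvme_qpair.c:474 prints its completion as TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. retryable, so the bdev layer quietly resubmits it under --bdev-retry-count -1. To tally the injected failures from a saved copy of this console output (the filename below is illustrative):

    # Count digest-error events; with -t corrupt -i 256 armed, expect up to 256.
    grep -c 'data digest error on tqpair' nvmf-tcp-phy-autotest-console.log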
00:29:42.529 [2024-11-20 15:39:31.261555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.529 [2024-11-20 15:39:31.261561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.529 [2024-11-20 15:39:31.269764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.529 [2024-11-20 15:39:31.269781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.529 [2024-11-20 15:39:31.269788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.529 [2024-11-20 15:39:31.278020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.529 [2024-11-20 15:39:31.278037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.529 [2024-11-20 15:39:31.278043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.529 [2024-11-20 15:39:31.287557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.529 [2024-11-20 15:39:31.287574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.529 [2024-11-20 15:39:31.287580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.529 [2024-11-20 15:39:31.296585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.529 [2024-11-20 15:39:31.296602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.529 [2024-11-20 15:39:31.296608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.529 [2024-11-20 15:39:31.305353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.529 [2024-11-20 15:39:31.305370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.529 [2024-11-20 15:39:31.305376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.529 [2024-11-20 15:39:31.314512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.529 [2024-11-20 15:39:31.314530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.529 [2024-11-20 15:39:31.314536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.529 [2024-11-20 15:39:31.323245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x9ab0e0) 00:29:42.529 [2024-11-20 15:39:31.323262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.529 [2024-11-20 15:39:31.323268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.529 [2024-11-20 15:39:31.331965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.529 [2024-11-20 15:39:31.331982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.529 [2024-11-20 15:39:31.331989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.529 [2024-11-20 15:39:31.340694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.529 [2024-11-20 15:39:31.340711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.529 [2024-11-20 15:39:31.340717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.529 [2024-11-20 15:39:31.349390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.529 [2024-11-20 15:39:31.349407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.529 [2024-11-20 15:39:31.349413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.529 [2024-11-20 15:39:31.358968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.529 [2024-11-20 15:39:31.358985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.529 [2024-11-20 15:39:31.358995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.529 [2024-11-20 15:39:31.367617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.529 [2024-11-20 15:39:31.367634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.529 [2024-11-20 15:39:31.367640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.529 [2024-11-20 15:39:31.377005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.529 [2024-11-20 15:39:31.377022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.529 [2024-11-20 15:39:31.377029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.529 [2024-11-20 15:39:31.386769] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.529 [2024-11-20 15:39:31.386785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.529 [2024-11-20 15:39:31.386792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.529 [2024-11-20 15:39:31.396106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.529 [2024-11-20 15:39:31.396123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.529 [2024-11-20 15:39:31.396129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.529 [2024-11-20 15:39:31.405109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.529 [2024-11-20 15:39:31.405126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.529 [2024-11-20 15:39:31.405132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.529 [2024-11-20 15:39:31.412845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.529 [2024-11-20 15:39:31.412862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.529 [2024-11-20 15:39:31.412868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.529 [2024-11-20 15:39:31.422637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.529 [2024-11-20 15:39:31.422654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.529 [2024-11-20 15:39:31.422661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.529 [2024-11-20 15:39:31.431573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.529 [2024-11-20 15:39:31.431590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.529 [2024-11-20 15:39:31.431597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.529 [2024-11-20 15:39:31.439359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.529 [2024-11-20 15:39:31.439375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.529 [2024-11-20 15:39:31.439382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:29:42.529 [2024-11-20 15:39:31.449089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:42.529 [2024-11-20 15:39:31.449106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.529 [2024-11-20 15:39:31.449113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... well over a hundred further entries of the same three-line pattern elided (00:29:42.529-00:29:43.842, i.e. 2024-11-20 15:39:31.449-15:39:32.751): each READ on qid:1 hits "data digest error on tqpair=(0x9ab0e0)" in nvme_tcp_accel_seq_recv_compute_crc32_done and completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) sqhd:0001 p:0 m:0 dnr:0; only cid and lba vary per entry. Interleaved throughput samples retained below ...]
00:29:42.792 27773.00 IOPS, 108.49 MiB/s [2024-11-20T14:39:31.752Z]
00:29:43.842 28016.50 IOPS, 109.44 MiB/s [2024-11-20T14:39:32.802Z]
00:29:43.842 [2024-11-20 15:39:32.751559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ab0e0) 00:29:43.842 [2024-11-20 15:39:32.751575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.842 [2024-11-20 15:39:32.751582] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:43.842
00:29:43.842 Latency(us)
00:29:43.842 [2024-11-20T14:39:32.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:43.842 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:43.842 nvme0n1 : 2.00 28036.68 109.52 0.00 0.00 4560.91 2239.15 18240.85
00:29:43.842 [2024-11-20T14:39:32.802Z] ===================================================================================================================
00:29:43.842 [2024-11-20T14:39:32.802Z] Total : 28036.68 109.52 0.00 0.00 4560.91 2239.15 18240.85
00:29:43.842 {
00:29:43.842 "results": [
00:29:43.842 {
00:29:43.842 "job": "nvme0n1",
00:29:43.842 "core_mask": "0x2",
00:29:43.842 "workload": "randread",
00:29:43.842 "status": "finished",
00:29:43.842 "queue_depth": 128,
00:29:43.842 "io_size": 4096,
00:29:43.842 "runtime": 2.003126,
00:29:43.842 "iops": 28036.678671236856,
00:29:43.842 "mibps": 109.51827605951897,
00:29:43.842 "io_failed": 0,
00:29:43.842 "io_timeout": 0,
00:29:43.842 "avg_latency_us": 4560.914221138038,
00:29:43.842 "min_latency_us": 2239.1466666666665,
00:29:43.842 "max_latency_us": 18240.853333333333
00:29:43.842 }
00:29:43.842 ],
00:29:43.842 "core_count": 1
00:29:43.842 }
00:29:43.842 15:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:43.842 15:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:43.842 15:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:43.842 | .driver_specific
00:29:43.842 | .nvme_error
00:29:43.842 | .status_code
00:29:43.842 | .command_transient_transport_error'
00:29:43.842 15:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:44.103 15:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 220 > 0 ))
00:29:44.103 15:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 777881
00:29:44.103 15:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 777881 ']'
00:29:44.103 15:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 777881
00:29:44.103 15:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:29:44.103 15:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:44.103 15:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 777881
00:29:44.103 15:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:44.103 15:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:44.103 15:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 777881'
00:29:44.103 killing process with pid 777881
00:29:44.103 15:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 777881
00:29:44.103 Received shutdown signal, test time was about 2.000000 seconds
00:29:44.103
00:29:44.103 Latency(us)
00:29:44.103 [2024-11-20T14:39:33.063Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:44.103 [2024-11-20T14:39:33.063Z] ===================================================================================================================
00:29:44.103 [2024-11-20T14:39:33.063Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:44.103 15:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 777881
00:29:44.365 15:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:29:44.365 15:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:44.365 15:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:29:44.365 15:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:44.365 15:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:44.365 15:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=778711
00:29:44.365 15:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 778711 /var/tmp/bperf.sock
00:29:44.366 15:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 778711 ']'
00:29:44.366 15:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:29:44.366 15:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:44.366 15:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:44.366 15:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:44.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:44.366 15:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:44.366 15:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:44.366 [2024-11-20 15:39:33.179531] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization...
00:29:44.366 [2024-11-20 15:39:33.179589] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid778711 ]
00:29:44.366 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:44.366 Zero copy mechanism will not be used.
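The get_transient_errcount call a few lines up is the heart of the pass/fail check: it reads the bdev's iostat over the bperf RPC socket and pulls out the per-status-code NVMe error counter that --nvme-error-stat maintains. A minimal standalone sketch of that query, assuming the same rpc.py path and socket shown in the trace (the variable name errcount is ours; the RPC, bdev name, and jq filter are copied verbatim from the log):

  #!/usr/bin/env bash
  # Count completions that ended in TRANSIENT TRANSPORT ERROR (00/22) for
  # nvme0n1, mirroring host/digest.sh@27-28 in the trace above.
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  errcount=$("$rpc_py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0]
      | .driver_specific
      | .nvme_error
      | .status_code
      | .command_transient_transport_error')
  # The harness then asserts that the injected digest errors actually surfaced:
  (( errcount > 0 )) && echo "nvme0n1 saw $errcount transient transport errors"

The (( 220 > 0 )) in the trace is exactly this counter's value for the run that just finished.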
00:29:44.366 [2024-11-20 15:39:33.264375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:44.366 [2024-11-20 15:39:33.293993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:45.309 15:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:45.309 15:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:29:45.309 15:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:45.309 15:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:45.309 15:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:45.309 15:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:45.309 15:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:45.309 15:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:45.309 15:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:45.309 15:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:45.570 nvme0n1
00:29:45.570 15:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:45.570 15:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:45.570 15:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:45.570 15:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:45.570 15:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:45.570 15:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:45.570 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:45.570 Zero copy mechanism will not be used.
00:29:45.570 Running I/O for 2 seconds...
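The 2-second run whose digest errors follow is driven entirely by the RPC sequence just traced. A condensed replay of those calls, with every flag copied verbatim from the trace; note two assumptions on our part: rpc_cmd in the trace addresses the target application's default RPC socket rather than the bperf socket, and we read -i 32 as an injection interval. The comments paraphrase our reading of the flags:

  #!/usr/bin/env bash
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock

  # host/digest.sh@61: keep per-status-code NVMe error counters and retry
  # failed I/O indefinitely, so injected digest errors are retried rather
  # than failing the bdevperf job outright.
  "$rpc_py" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # host/digest.sh@63-64: attach with TCP data digest (--ddgst) enabled while
  # crc32c error injection is disabled, so the connection comes up clean.
  # (rpc_cmd in the trace: no -s flag, i.e. rpc.py's default socket.)
  "$rpc_py" accel_error_inject_error -o crc32c -t disable
  "$rpc_py" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # host/digest.sh@67: now corrupt crc32c results (-t corrupt -i 32), which is
  # what produces the "data digest error" records that follow in the log.
  "$rpc_py" accel_error_inject_error -o crc32c -t corrupt -i 32

  # host/digest.sh@69: kick off the timed workload in the running bdevperf.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s "$sock" perform_tests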
00:29:45.570 [2024-11-20 15:39:34.468548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:45.570 [2024-11-20 15:39:34.468582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.570 [2024-11-20 15:39:34.468591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.570 [2024-11-20 15:39:34.479880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:45.570 [2024-11-20 15:39:34.479904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.570 [2024-11-20 15:39:34.479911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.570 [2024-11-20 15:39:34.491465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:45.570 [2024-11-20 15:39:34.491486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.570 [2024-11-20 15:39:34.491493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.570 [2024-11-20 15:39:34.501239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:45.570 [2024-11-20 15:39:34.501258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.570 [2024-11-20 15:39:34.501264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.570 [2024-11-20 15:39:34.507446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:45.570 [2024-11-20 15:39:34.507464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.570 [2024-11-20 15:39:34.507471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.570 [2024-11-20 15:39:34.517990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:45.570 [2024-11-20 15:39:34.518009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.570 [2024-11-20 15:39:34.518016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.832 [2024-11-20 15:39:34.529474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:45.832 [2024-11-20 15:39:34.529493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.832 [2024-11-20 15:39:34.529500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.832 [2024-11-20 15:39:34.540151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:45.832 [2024-11-20 15:39:34.540177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.832 [2024-11-20 15:39:34.540185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.832 [2024-11-20 15:39:34.551270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:45.832 [2024-11-20 15:39:34.551293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.832 [2024-11-20 15:39:34.551300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.832 [2024-11-20 15:39:34.561206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:45.832 [2024-11-20 15:39:34.561225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.832 [2024-11-20 15:39:34.561232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.832 [2024-11-20 15:39:34.569909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:45.832 [2024-11-20 15:39:34.569928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.832 [2024-11-20 15:39:34.569935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.832 [2024-11-20 15:39:34.579332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:45.832 [2024-11-20 15:39:34.579352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.832 [2024-11-20 15:39:34.579358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.832 [2024-11-20 15:39:34.586950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:45.832 [2024-11-20 15:39:34.586969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.832 [2024-11-20 15:39:34.586975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.832 [2024-11-20 15:39:34.597872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:45.832 [2024-11-20 15:39:34.597890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.832 [2024-11-20 15:39:34.597897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.832 [2024-11-20 15:39:34.606346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:45.832 [2024-11-20 15:39:34.606365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.832 [2024-11-20 15:39:34.606371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.832 [2024-11-20 15:39:34.613198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:45.832 [2024-11-20 15:39:34.613217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.832 [2024-11-20 15:39:34.613223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.832 [2024-11-20 15:39:34.622284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:45.832 [2024-11-20 15:39:34.622302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.832 [2024-11-20 15:39:34.622309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.832 [2024-11-20 15:39:34.633087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:45.832 [2024-11-20 15:39:34.633106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.832 [2024-11-20 15:39:34.633112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.832 [2024-11-20 15:39:34.643932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:45.832 [2024-11-20 15:39:34.643952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.832 [2024-11-20 15:39:34.643958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.832 [2024-11-20 15:39:34.655573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:45.832 [2024-11-20 15:39:34.655592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.832 [2024-11-20 15:39:34.655599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.832 [2024-11-20 15:39:34.667257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:45.832 [2024-11-20 15:39:34.667276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.832 [2024-11-20 15:39:34.667283] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.832 [2024-11-20 15:39:34.674954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:45.832 [2024-11-20 15:39:34.674973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.832 [2024-11-20 15:39:34.674979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.832 [2024-11-20 15:39:34.684280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:45.832 [2024-11-20 15:39:34.684299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.832 [2024-11-20 15:39:34.684306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.832 [2024-11-20 15:39:34.695027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:45.832 [2024-11-20 15:39:34.695046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.832 [2024-11-20 15:39:34.695053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.832 [2024-11-20 15:39:34.705546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:45.832 [2024-11-20 15:39:34.705565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.832 [2024-11-20 15:39:34.705571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.832 [2024-11-20 15:39:34.714595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:45.832 [2024-11-20 15:39:34.714614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.832 [2024-11-20 15:39:34.714625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.832 [2024-11-20 15:39:34.723423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:45.832 [2024-11-20 15:39:34.723442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.832 [2024-11-20 15:39:34.723449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.832 [2024-11-20 15:39:34.731553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:45.832 [2024-11-20 15:39:34.731572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.832 
[2024-11-20 15:39:34.731579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.832 [2024-11-20 15:39:34.740095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:45.832 [2024-11-20 15:39:34.740114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.832 [2024-11-20 15:39:34.740121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.833 [2024-11-20 15:39:34.747257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:45.833 [2024-11-20 15:39:34.747276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.833 [2024-11-20 15:39:34.747283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:45.833 [2024-11-20 15:39:34.758538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:45.833 [2024-11-20 15:39:34.758558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.833 [2024-11-20 15:39:34.758564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:45.833 [2024-11-20 15:39:34.767273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:45.833 [2024-11-20 15:39:34.767293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.833 [2024-11-20 15:39:34.767299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:45.833 [2024-11-20 15:39:34.777252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:45.833 [2024-11-20 15:39:34.777271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.833 [2024-11-20 15:39:34.777277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:45.833 [2024-11-20 15:39:34.786247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:45.833 [2024-11-20 15:39:34.786267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.833 [2024-11-20 15:39:34.786274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.094 [2024-11-20 15:39:34.796377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.094 [2024-11-20 15:39:34.796400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:46.094 [2024-11-20 15:39:34.796406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.094 [2024-11-20 15:39:34.808213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.094 [2024-11-20 15:39:34.808232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.094 [2024-11-20 15:39:34.808238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.095 [2024-11-20 15:39:34.817092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.095 [2024-11-20 15:39:34.817111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.095 [2024-11-20 15:39:34.817118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.095 [2024-11-20 15:39:34.826015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.095 [2024-11-20 15:39:34.826035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.095 [2024-11-20 15:39:34.826041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.095 [2024-11-20 15:39:34.837974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.095 [2024-11-20 15:39:34.837993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.095 [2024-11-20 15:39:34.837999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.095 [2024-11-20 15:39:34.849725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.095 [2024-11-20 15:39:34.849743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.095 [2024-11-20 15:39:34.849749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.095 [2024-11-20 15:39:34.862717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.095 [2024-11-20 15:39:34.862736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.095 [2024-11-20 15:39:34.862743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.095 [2024-11-20 15:39:34.873660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.095 [2024-11-20 15:39:34.873679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.095 [2024-11-20 15:39:34.873685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.095 [2024-11-20 15:39:34.882360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.095 [2024-11-20 15:39:34.882379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.095 [2024-11-20 15:39:34.882386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.095 [2024-11-20 15:39:34.891948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.095 [2024-11-20 15:39:34.891967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.095 [2024-11-20 15:39:34.891973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.095 [2024-11-20 15:39:34.902106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.095 [2024-11-20 15:39:34.902124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.095 [2024-11-20 15:39:34.902131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.095 [2024-11-20 15:39:34.912853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.095 [2024-11-20 15:39:34.912873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.095 [2024-11-20 15:39:34.912880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.095 [2024-11-20 15:39:34.920946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.095 [2024-11-20 15:39:34.920965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.095 [2024-11-20 15:39:34.920972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.095 [2024-11-20 15:39:34.930381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.095 [2024-11-20 15:39:34.930401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.095 [2024-11-20 15:39:34.930408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.095 [2024-11-20 15:39:34.939256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.095 [2024-11-20 15:39:34.939275] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.095 [2024-11-20 15:39:34.939281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.095 [2024-11-20 15:39:34.949020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.095 [2024-11-20 15:39:34.949039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.095 [2024-11-20 15:39:34.949045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.095 [2024-11-20 15:39:34.957209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.095 [2024-11-20 15:39:34.957228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.095 [2024-11-20 15:39:34.957234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.095 [2024-11-20 15:39:34.964545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.095 [2024-11-20 15:39:34.964564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.095 [2024-11-20 15:39:34.964574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.095 [2024-11-20 15:39:34.972790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.095 [2024-11-20 15:39:34.972809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.095 [2024-11-20 15:39:34.972815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.095 [2024-11-20 15:39:34.977053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.095 [2024-11-20 15:39:34.977072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.095 [2024-11-20 15:39:34.977079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.095 [2024-11-20 15:39:34.981938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.095 [2024-11-20 15:39:34.981957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.095 [2024-11-20 15:39:34.981963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.095 [2024-11-20 15:39:34.992745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.095 
[2024-11-20 15:39:34.992765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.095 [2024-11-20 15:39:34.992771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.095 [2024-11-20 15:39:35.004302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.095 [2024-11-20 15:39:35.004322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.095 [2024-11-20 15:39:35.004329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.095 [2024-11-20 15:39:35.015873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.095 [2024-11-20 15:39:35.015892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.095 [2024-11-20 15:39:35.015898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.095 [2024-11-20 15:39:35.025276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.095 [2024-11-20 15:39:35.025295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.095 [2024-11-20 15:39:35.025301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.095 [2024-11-20 15:39:35.031557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.095 [2024-11-20 15:39:35.031575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.095 [2024-11-20 15:39:35.031582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.095 [2024-11-20 15:39:35.034394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.095 [2024-11-20 15:39:35.034412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.095 [2024-11-20 15:39:35.034418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.095 [2024-11-20 15:39:35.045361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.095 [2024-11-20 15:39:35.045379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.095 [2024-11-20 15:39:35.045386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.357 [2024-11-20 15:39:35.053348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xf2d750) 00:29:46.357 [2024-11-20 15:39:35.053366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.357 [2024-11-20 15:39:35.053372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.357 [2024-11-20 15:39:35.062721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.357 [2024-11-20 15:39:35.062739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.357 [2024-11-20 15:39:35.062745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.357 [2024-11-20 15:39:35.068813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.357 [2024-11-20 15:39:35.068831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.357 [2024-11-20 15:39:35.068838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.357 [2024-11-20 15:39:35.077038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.357 [2024-11-20 15:39:35.077055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.357 [2024-11-20 15:39:35.077062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.357 [2024-11-20 15:39:35.082451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.357 [2024-11-20 15:39:35.082470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.357 [2024-11-20 15:39:35.082476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.357 [2024-11-20 15:39:35.092718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.357 [2024-11-20 15:39:35.092736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.357 [2024-11-20 15:39:35.092742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.357 [2024-11-20 15:39:35.102508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.357 [2024-11-20 15:39:35.102527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.357 [2024-11-20 15:39:35.102533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.357 [2024-11-20 15:39:35.111616] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.357 [2024-11-20 15:39:35.111635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.357 [2024-11-20 15:39:35.111641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.357 [2024-11-20 15:39:35.116064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.357 [2024-11-20 15:39:35.116083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.357 [2024-11-20 15:39:35.116089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.357 [2024-11-20 15:39:35.126384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.357 [2024-11-20 15:39:35.126403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.357 [2024-11-20 15:39:35.126409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.357 [2024-11-20 15:39:35.137728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.357 [2024-11-20 15:39:35.137747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.357 [2024-11-20 15:39:35.137753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.357 [2024-11-20 15:39:35.147906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.357 [2024-11-20 15:39:35.147925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.357 [2024-11-20 15:39:35.147931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.357 [2024-11-20 15:39:35.152512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.357 [2024-11-20 15:39:35.152531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.357 [2024-11-20 15:39:35.152537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.357 [2024-11-20 15:39:35.157275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.357 [2024-11-20 15:39:35.157293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.357 [2024-11-20 15:39:35.157300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:29:46.357 [2024-11-20 15:39:35.166356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.357 [2024-11-20 15:39:35.166375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.357 [2024-11-20 15:39:35.166381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.357 [2024-11-20 15:39:35.175303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.357 [2024-11-20 15:39:35.175325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.357 [2024-11-20 15:39:35.175332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.357 [2024-11-20 15:39:35.184163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.357 [2024-11-20 15:39:35.184181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.357 [2024-11-20 15:39:35.184187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.358 [2024-11-20 15:39:35.192642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.358 [2024-11-20 15:39:35.192660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.358 [2024-11-20 15:39:35.192667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.358 [2024-11-20 15:39:35.197432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.358 [2024-11-20 15:39:35.197450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.358 [2024-11-20 15:39:35.197457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.358 [2024-11-20 15:39:35.203909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.358 [2024-11-20 15:39:35.203928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.358 [2024-11-20 15:39:35.203934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.358 [2024-11-20 15:39:35.208936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.358 [2024-11-20 15:39:35.208956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.358 [2024-11-20 15:39:35.208965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.358 [2024-11-20 15:39:35.216713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.358 [2024-11-20 15:39:35.216733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.358 [2024-11-20 15:39:35.216739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.358 [2024-11-20 15:39:35.227425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.358 [2024-11-20 15:39:35.227444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.358 [2024-11-20 15:39:35.227452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.358 [2024-11-20 15:39:35.237563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.358 [2024-11-20 15:39:35.237583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.358 [2024-11-20 15:39:35.237590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.358 [2024-11-20 15:39:35.242432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.358 [2024-11-20 15:39:35.242451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.358 [2024-11-20 15:39:35.242459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.358 [2024-11-20 15:39:35.249498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.358 [2024-11-20 15:39:35.249517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.358 [2024-11-20 15:39:35.249523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.358 [2024-11-20 15:39:35.255685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.358 [2024-11-20 15:39:35.255704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.358 [2024-11-20 15:39:35.255712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.358 [2024-11-20 15:39:35.262539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.358 [2024-11-20 15:39:35.262558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.358 [2024-11-20 15:39:35.262564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.358 [2024-11-20 15:39:35.273859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.358 [2024-11-20 15:39:35.273878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.358 [2024-11-20 15:39:35.273884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.358 [2024-11-20 15:39:35.286444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.358 [2024-11-20 15:39:35.286464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.358 [2024-11-20 15:39:35.286471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.358 [2024-11-20 15:39:35.297280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.358 [2024-11-20 15:39:35.297299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.358 [2024-11-20 15:39:35.297305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.358 [2024-11-20 15:39:35.304305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.358 [2024-11-20 15:39:35.304325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.358 [2024-11-20 15:39:35.304331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.358 [2024-11-20 15:39:35.314360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.358 [2024-11-20 15:39:35.314379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.358 [2024-11-20 15:39:35.314388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.620 [2024-11-20 15:39:35.324945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.620 [2024-11-20 15:39:35.324964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.620 [2024-11-20 15:39:35.324970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.620 [2024-11-20 15:39:35.332413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.620 [2024-11-20 15:39:35.332431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.620 [2024-11-20 15:39:35.332437] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.620 [2024-11-20 15:39:35.339184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.620 [2024-11-20 15:39:35.339203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.620 [2024-11-20 15:39:35.339209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.620 [2024-11-20 15:39:35.347725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.620 [2024-11-20 15:39:35.347744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.620 [2024-11-20 15:39:35.347750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.621 [2024-11-20 15:39:35.355557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.621 [2024-11-20 15:39:35.355576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.621 [2024-11-20 15:39:35.355583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.621 [2024-11-20 15:39:35.364662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.621 [2024-11-20 15:39:35.364680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.621 [2024-11-20 15:39:35.364686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.621 [2024-11-20 15:39:35.373812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.621 [2024-11-20 15:39:35.373831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.621 [2024-11-20 15:39:35.373838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.621 [2024-11-20 15:39:35.384365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.621 [2024-11-20 15:39:35.384383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.621 [2024-11-20 15:39:35.384389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.621 [2024-11-20 15:39:35.393831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.621 [2024-11-20 15:39:35.393854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.621 
[2024-11-20 15:39:35.393860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.621 [2024-11-20 15:39:35.403912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.621 [2024-11-20 15:39:35.403930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.621 [2024-11-20 15:39:35.403936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.621 [2024-11-20 15:39:35.416032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.621 [2024-11-20 15:39:35.416051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.621 [2024-11-20 15:39:35.416058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.621 [2024-11-20 15:39:35.426624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.621 [2024-11-20 15:39:35.426643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.621 [2024-11-20 15:39:35.426649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.621 [2024-11-20 15:39:35.437145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.621 [2024-11-20 15:39:35.437170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.621 [2024-11-20 15:39:35.437177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.621 [2024-11-20 15:39:35.445229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.621 [2024-11-20 15:39:35.445247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.621 [2024-11-20 15:39:35.445254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.621 [2024-11-20 15:39:35.452662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.621 [2024-11-20 15:39:35.452681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.621 [2024-11-20 15:39:35.452687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.621 [2024-11-20 15:39:35.457431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.621 [2024-11-20 15:39:35.457449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11776 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:46.621 [2024-11-20 15:39:35.457456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.621 [2024-11-20 15:39:35.459848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.621 [2024-11-20 15:39:35.459867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.621 [2024-11-20 15:39:35.459873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.621 3442.00 IOPS, 430.25 MiB/s [2024-11-20T14:39:35.581Z] [2024-11-20 15:39:35.468375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.621 [2024-11-20 15:39:35.468394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.621 [2024-11-20 15:39:35.468400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.621 [2024-11-20 15:39:35.473709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.621 [2024-11-20 15:39:35.473728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.621 [2024-11-20 15:39:35.473734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.621 [2024-11-20 15:39:35.478184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.621 [2024-11-20 15:39:35.478203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.621 [2024-11-20 15:39:35.478209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.621 [2024-11-20 15:39:35.484680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.621 [2024-11-20 15:39:35.484699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.621 [2024-11-20 15:39:35.484705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.621 [2024-11-20 15:39:35.494034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.621 [2024-11-20 15:39:35.494052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.621 [2024-11-20 15:39:35.494059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.621 [2024-11-20 15:39:35.503377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.621 [2024-11-20 15:39:35.503396] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.621 [2024-11-20 15:39:35.503402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.621 [2024-11-20 15:39:35.513053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.621 [2024-11-20 15:39:35.513072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.621 [2024-11-20 15:39:35.513078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.621 [2024-11-20 15:39:35.520664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.621 [2024-11-20 15:39:35.520683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.621 [2024-11-20 15:39:35.520690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.621 [2024-11-20 15:39:35.525146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.621 [2024-11-20 15:39:35.525169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.621 [2024-11-20 15:39:35.525182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.621 [2024-11-20 15:39:35.531615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.621 [2024-11-20 15:39:35.531634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.621 [2024-11-20 15:39:35.531641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.621 [2024-11-20 15:39:35.538910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.621 [2024-11-20 15:39:35.538929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.621 [2024-11-20 15:39:35.538935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.621 [2024-11-20 15:39:35.545075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.621 [2024-11-20 15:39:35.545094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.621 [2024-11-20 15:39:35.545100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.621 [2024-11-20 15:39:35.554702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xf2d750) 00:29:46.621 [2024-11-20 15:39:35.554721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.621 [2024-11-20 15:39:35.554727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.621 [2024-11-20 15:39:35.565604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.622 [2024-11-20 15:39:35.565624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.622 [2024-11-20 15:39:35.565630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.622 [2024-11-20 15:39:35.571536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.622 [2024-11-20 15:39:35.571555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.622 [2024-11-20 15:39:35.571562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.622 [2024-11-20 15:39:35.575890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.622 [2024-11-20 15:39:35.575909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.622 [2024-11-20 15:39:35.575915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.883 [2024-11-20 15:39:35.580379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.883 [2024-11-20 15:39:35.580398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.883 [2024-11-20 15:39:35.580405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.883 [2024-11-20 15:39:35.591733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.884 [2024-11-20 15:39:35.591755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.884 [2024-11-20 15:39:35.591761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.884 [2024-11-20 15:39:35.601379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.884 [2024-11-20 15:39:35.601398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.884 [2024-11-20 15:39:35.601404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.884 [2024-11-20 15:39:35.613196] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.884 [2024-11-20 15:39:35.613214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.884 [2024-11-20 15:39:35.613221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.884 [2024-11-20 15:39:35.624366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.884 [2024-11-20 15:39:35.624385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.884 [2024-11-20 15:39:35.624391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.884 [2024-11-20 15:39:35.631475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.884 [2024-11-20 15:39:35.631493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.884 [2024-11-20 15:39:35.631500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.884 [2024-11-20 15:39:35.636060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.884 [2024-11-20 15:39:35.636079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.884 [2024-11-20 15:39:35.636085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.884 [2024-11-20 15:39:35.640554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.884 [2024-11-20 15:39:35.640573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.884 [2024-11-20 15:39:35.640579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.884 [2024-11-20 15:39:35.649432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.884 [2024-11-20 15:39:35.649450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.884 [2024-11-20 15:39:35.649456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.884 [2024-11-20 15:39:35.654309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.884 [2024-11-20 15:39:35.654328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.884 [2024-11-20 15:39:35.654334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:29:46.884 [2024-11-20 15:39:35.661517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.884 [2024-11-20 15:39:35.661535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.884 [2024-11-20 15:39:35.661541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.884 [2024-11-20 15:39:35.666186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.884 [2024-11-20 15:39:35.666205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.884 [2024-11-20 15:39:35.666212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.884 [2024-11-20 15:39:35.673568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.884 [2024-11-20 15:39:35.673587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.884 [2024-11-20 15:39:35.673595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.884 [2024-11-20 15:39:35.679024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.884 [2024-11-20 15:39:35.679043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.884 [2024-11-20 15:39:35.679050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.884 [2024-11-20 15:39:35.683448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.884 [2024-11-20 15:39:35.683467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.884 [2024-11-20 15:39:35.683473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.884 [2024-11-20 15:39:35.690338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.884 [2024-11-20 15:39:35.690358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.884 [2024-11-20 15:39:35.690365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.884 [2024-11-20 15:39:35.696112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.884 [2024-11-20 15:39:35.696130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.884 [2024-11-20 15:39:35.696136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.884 [2024-11-20 15:39:35.702202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.884 [2024-11-20 15:39:35.702222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.884 [2024-11-20 15:39:35.702228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.884 [2024-11-20 15:39:35.711340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.884 [2024-11-20 15:39:35.711362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.884 [2024-11-20 15:39:35.711368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.884 [2024-11-20 15:39:35.716942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.884 [2024-11-20 15:39:35.716960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.884 [2024-11-20 15:39:35.716967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.884 [2024-11-20 15:39:35.723307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.884 [2024-11-20 15:39:35.723326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.884 [2024-11-20 15:39:35.723332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.884 [2024-11-20 15:39:35.728037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.884 [2024-11-20 15:39:35.728055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.884 [2024-11-20 15:39:35.728062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.884 [2024-11-20 15:39:35.735511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.884 [2024-11-20 15:39:35.735530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.884 [2024-11-20 15:39:35.735536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.884 [2024-11-20 15:39:35.742012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.884 [2024-11-20 15:39:35.742030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.884 [2024-11-20 15:39:35.742037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.884 [2024-11-20 15:39:35.750602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.884 [2024-11-20 15:39:35.750621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.884 [2024-11-20 15:39:35.750628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.884 [2024-11-20 15:39:35.761952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.884 [2024-11-20 15:39:35.761971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.884 [2024-11-20 15:39:35.761978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.884 [2024-11-20 15:39:35.767306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.884 [2024-11-20 15:39:35.767325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.884 [2024-11-20 15:39:35.767331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.884 [2024-11-20 15:39:35.772594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.884 [2024-11-20 15:39:35.772613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.884 [2024-11-20 15:39:35.772619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.885 [2024-11-20 15:39:35.777300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.885 [2024-11-20 15:39:35.777319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.885 [2024-11-20 15:39:35.777325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.885 [2024-11-20 15:39:35.783497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.885 [2024-11-20 15:39:35.783515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.885 [2024-11-20 15:39:35.783521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.885 [2024-11-20 15:39:35.792659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.885 [2024-11-20 15:39:35.792678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.885 [2024-11-20 15:39:35.792684] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.885 [2024-11-20 15:39:35.800210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.885 [2024-11-20 15:39:35.800228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.885 [2024-11-20 15:39:35.800235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:46.885 [2024-11-20 15:39:35.807075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.885 [2024-11-20 15:39:35.807093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.885 [2024-11-20 15:39:35.807101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:46.885 [2024-11-20 15:39:35.816350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.885 [2024-11-20 15:39:35.816368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.885 [2024-11-20 15:39:35.816375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:46.885 [2024-11-20 15:39:35.824822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.885 [2024-11-20 15:39:35.824841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.885 [2024-11-20 15:39:35.824848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:46.885 [2024-11-20 15:39:35.834286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:46.885 [2024-11-20 15:39:35.834305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.885 [2024-11-20 15:39:35.834314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.146 [2024-11-20 15:39:35.845148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.146 [2024-11-20 15:39:35.845171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.146 [2024-11-20 15:39:35.845178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.146 [2024-11-20 15:39:35.852248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.146 [2024-11-20 15:39:35.852267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.146 
[2024-11-20 15:39:35.852273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.146 [2024-11-20 15:39:35.859877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.146 [2024-11-20 15:39:35.859895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.146 [2024-11-20 15:39:35.859901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.146 [2024-11-20 15:39:35.868272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.146 [2024-11-20 15:39:35.868290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.146 [2024-11-20 15:39:35.868296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.146 [2024-11-20 15:39:35.876472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.146 [2024-11-20 15:39:35.876491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.146 [2024-11-20 15:39:35.876498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.146 [2024-11-20 15:39:35.881405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.146 [2024-11-20 15:39:35.881424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.146 [2024-11-20 15:39:35.881430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.146 [2024-11-20 15:39:35.886251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.146 [2024-11-20 15:39:35.886269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.146 [2024-11-20 15:39:35.886276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.146 [2024-11-20 15:39:35.892558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.146 [2024-11-20 15:39:35.892577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.146 [2024-11-20 15:39:35.892584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.146 [2024-11-20 15:39:35.898340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.146 [2024-11-20 15:39:35.898361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:47.146 [2024-11-20 15:39:35.898367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.146 [2024-11-20 15:39:35.910626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.146 [2024-11-20 15:39:35.910644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.146 [2024-11-20 15:39:35.910650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.146 [2024-11-20 15:39:35.923553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.146 [2024-11-20 15:39:35.923571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.146 [2024-11-20 15:39:35.923577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.146 [2024-11-20 15:39:35.935064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.146 [2024-11-20 15:39:35.935083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.146 [2024-11-20 15:39:35.935089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.146 [2024-11-20 15:39:35.945324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.146 [2024-11-20 15:39:35.945342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.146 [2024-11-20 15:39:35.945349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.146 [2024-11-20 15:39:35.957532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.146 [2024-11-20 15:39:35.957550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.146 [2024-11-20 15:39:35.957556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.146 [2024-11-20 15:39:35.968273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.146 [2024-11-20 15:39:35.968290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.146 [2024-11-20 15:39:35.968297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.146 [2024-11-20 15:39:35.976834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.146 [2024-11-20 15:39:35.976853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.146 [2024-11-20 15:39:35.976859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.146 [2024-11-20 15:39:35.981527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.146 [2024-11-20 15:39:35.981544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.146 [2024-11-20 15:39:35.981551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.146 [2024-11-20 15:39:35.989811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.146 [2024-11-20 15:39:35.989829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.146 [2024-11-20 15:39:35.989835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.146 [2024-11-20 15:39:35.995919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.146 [2024-11-20 15:39:35.995938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.146 [2024-11-20 15:39:35.995944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.146 [2024-11-20 15:39:36.002740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.146 [2024-11-20 15:39:36.002758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.146 [2024-11-20 15:39:36.002765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.146 [2024-11-20 15:39:36.011021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.146 [2024-11-20 15:39:36.011039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.146 [2024-11-20 15:39:36.011046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.146 [2024-11-20 15:39:36.016865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.146 [2024-11-20 15:39:36.016883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.146 [2024-11-20 15:39:36.016889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.146 [2024-11-20 15:39:36.028505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.146 [2024-11-20 15:39:36.028523] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.146 [2024-11-20 15:39:36.028530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.146 [2024-11-20 15:39:36.037842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.146 [2024-11-20 15:39:36.037862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.146 [2024-11-20 15:39:36.037868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.147 [2024-11-20 15:39:36.049546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.147 [2024-11-20 15:39:36.049564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.147 [2024-11-20 15:39:36.049570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.147 [2024-11-20 15:39:36.061628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.147 [2024-11-20 15:39:36.061647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.147 [2024-11-20 15:39:36.061656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.147 [2024-11-20 15:39:36.068408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.147 [2024-11-20 15:39:36.068426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.147 [2024-11-20 15:39:36.068432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.147 [2024-11-20 15:39:36.073977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.147 [2024-11-20 15:39:36.073996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.147 [2024-11-20 15:39:36.074003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.147 [2024-11-20 15:39:36.080977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.147 [2024-11-20 15:39:36.080995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.147 [2024-11-20 15:39:36.081002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.147 [2024-11-20 15:39:36.085767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.147 
[2024-11-20 15:39:36.085786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.147 [2024-11-20 15:39:36.085792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.147 [2024-11-20 15:39:36.093941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.147 [2024-11-20 15:39:36.093960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.147 [2024-11-20 15:39:36.093966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.147 [2024-11-20 15:39:36.103931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.147 [2024-11-20 15:39:36.103950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.147 [2024-11-20 15:39:36.103956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.407 [2024-11-20 15:39:36.114990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.407 [2024-11-20 15:39:36.115009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.407 [2024-11-20 15:39:36.115015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.407 [2024-11-20 15:39:36.125301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.407 [2024-11-20 15:39:36.125320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.408 [2024-11-20 15:39:36.125327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.408 [2024-11-20 15:39:36.135701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.408 [2024-11-20 15:39:36.135723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.408 [2024-11-20 15:39:36.135730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.408 [2024-11-20 15:39:36.143862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.408 [2024-11-20 15:39:36.143880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.408 [2024-11-20 15:39:36.143886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.408 [2024-11-20 15:39:36.153679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0xf2d750) 00:29:47.408 [2024-11-20 15:39:36.153698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.408 [2024-11-20 15:39:36.153704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.408 [2024-11-20 15:39:36.164314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.408 [2024-11-20 15:39:36.164334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.408 [2024-11-20 15:39:36.164340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.408 [2024-11-20 15:39:36.176660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.408 [2024-11-20 15:39:36.176679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.408 [2024-11-20 15:39:36.176687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.408 [2024-11-20 15:39:36.182820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.408 [2024-11-20 15:39:36.182838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.408 [2024-11-20 15:39:36.182845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.408 [2024-11-20 15:39:36.187949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.408 [2024-11-20 15:39:36.187968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.408 [2024-11-20 15:39:36.187974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.408 [2024-11-20 15:39:36.197168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.408 [2024-11-20 15:39:36.197185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.408 [2024-11-20 15:39:36.197192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.408 [2024-11-20 15:39:36.204677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.408 [2024-11-20 15:39:36.204696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.408 [2024-11-20 15:39:36.204702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.408 [2024-11-20 15:39:36.215206] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.408 [2024-11-20 15:39:36.215225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.408 [2024-11-20 15:39:36.215232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.408 [2024-11-20 15:39:36.222233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.408 [2024-11-20 15:39:36.222252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.408 [2024-11-20 15:39:36.222258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.408 [2024-11-20 15:39:36.231301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.408 [2024-11-20 15:39:36.231319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.408 [2024-11-20 15:39:36.231325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.408 [2024-11-20 15:39:36.240765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.408 [2024-11-20 15:39:36.240784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.408 [2024-11-20 15:39:36.240790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.408 [2024-11-20 15:39:36.250929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.408 [2024-11-20 15:39:36.250948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.408 [2024-11-20 15:39:36.250954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.408 [2024-11-20 15:39:36.260824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.408 [2024-11-20 15:39:36.260842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.408 [2024-11-20 15:39:36.260849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.408 [2024-11-20 15:39:36.267718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.408 [2024-11-20 15:39:36.267736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.408 [2024-11-20 15:39:36.267743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
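Each completion in this storm carries the status COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. the do-not-retry bit clear: an injected digest failure is surfaced as a retriable transport error, and with --bdev-retry-count -1 the host simply resubmits the I/O. After the run the harness totals these errors per bdev. A minimal sketch of that counter, mirroring the get_transient_errcount/jq trace that appears after the summary below (the rpc.py path, socket, and jq filter are taken verbatim from this log):

    # Ask the bdevperf instance for per-bdev NVMe error statistics
    # (recorded because bdev_nvme_set_options was called with
    # --nvme-error-stat) and pull out the bucket that status 00/22 lands in.
    get_transient_errcount() {
        local bdev=$1
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    # The test passes as long as at least one injected error was observed,
    # cf. the "(( 234 > 0 ))" check later in this trace.
    (( $(get_transient_errcount nvme0n1) > 0 ))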
00:29:47.408 [2024-11-20 15:39:36.278415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.408 [2024-11-20 15:39:36.278433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.408 [2024-11-20 15:39:36.278440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.408 [2024-11-20 15:39:36.288497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.408 [2024-11-20 15:39:36.288515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.408 [2024-11-20 15:39:36.288526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.408 [2024-11-20 15:39:36.299734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.408 [2024-11-20 15:39:36.299752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.408 [2024-11-20 15:39:36.299759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.408 [2024-11-20 15:39:36.311016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.408 [2024-11-20 15:39:36.311035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.408 [2024-11-20 15:39:36.311041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.408 [2024-11-20 15:39:36.321821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.408 [2024-11-20 15:39:36.321839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.408 [2024-11-20 15:39:36.321846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.408 [2024-11-20 15:39:36.332272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.408 [2024-11-20 15:39:36.332291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.408 [2024-11-20 15:39:36.332297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.408 [2024-11-20 15:39:36.340867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.408 [2024-11-20 15:39:36.340885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.408 [2024-11-20 15:39:36.340892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.408 [2024-11-20 15:39:36.349446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.408 [2024-11-20 15:39:36.349465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.408 [2024-11-20 15:39:36.349471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.408 [2024-11-20 15:39:36.359516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.408 [2024-11-20 15:39:36.359534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.408 [2024-11-20 15:39:36.359540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.669 [2024-11-20 15:39:36.370287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.669 [2024-11-20 15:39:36.370306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.669 [2024-11-20 15:39:36.370312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.669 [2024-11-20 15:39:36.376208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.669 [2024-11-20 15:39:36.376230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.669 [2024-11-20 15:39:36.376236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.669 [2024-11-20 15:39:36.383367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.669 [2024-11-20 15:39:36.383386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.669 [2024-11-20 15:39:36.383394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.669 [2024-11-20 15:39:36.389601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.669 [2024-11-20 15:39:36.389619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.669 [2024-11-20 15:39:36.389626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.669 [2024-11-20 15:39:36.397374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.669 [2024-11-20 15:39:36.397393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.669 [2024-11-20 15:39:36.397399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.669 [2024-11-20 15:39:36.405588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.669 [2024-11-20 15:39:36.405606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.669 [2024-11-20 15:39:36.405613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.669 [2024-11-20 15:39:36.415482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.669 [2024-11-20 15:39:36.415501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.669 [2024-11-20 15:39:36.415507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.670 [2024-11-20 15:39:36.424703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.670 [2024-11-20 15:39:36.424722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.670 [2024-11-20 15:39:36.424728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:47.670 [2024-11-20 15:39:36.435925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.670 [2024-11-20 15:39:36.435945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.670 [2024-11-20 15:39:36.435951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:47.670 [2024-11-20 15:39:36.445154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.670 [2024-11-20 15:39:36.445177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.670 [2024-11-20 15:39:36.445184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.670 [2024-11-20 15:39:36.456803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2d750) 00:29:47.670 [2024-11-20 15:39:36.456821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.670 [2024-11-20 15:39:36.456828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:47.670 3614.50 IOPS, 451.81 MiB/s 00:29:47.670 Latency(us) 00:29:47.670 [2024-11-20T14:39:36.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:47.670 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:47.670 nvme0n1 : 2.00 3619.91 452.49 0.00 0.00 4417.53 617.81 17039.36 00:29:47.670 [2024-11-20T14:39:36.630Z] 
=================================================================================================================== 00:29:47.670 [2024-11-20T14:39:36.630Z] Total : 3619.91 452.49 0.00 0.00 4417.53 617.81 17039.36 00:29:47.670 { 00:29:47.670 "results": [ 00:29:47.670 { 00:29:47.670 "job": "nvme0n1", 00:29:47.670 "core_mask": "0x2", 00:29:47.670 "workload": "randread", 00:29:47.670 "status": "finished", 00:29:47.670 "queue_depth": 16, 00:29:47.670 "io_size": 131072, 00:29:47.670 "runtime": 2.00143, 00:29:47.670 "iops": 3619.911763089391, 00:29:47.670 "mibps": 452.4889703861739, 00:29:47.670 "io_failed": 0, 00:29:47.670 "io_timeout": 0, 00:29:47.670 "avg_latency_us": 4417.526812974465, 00:29:47.670 "min_latency_us": 617.8133333333334, 00:29:47.670 "max_latency_us": 17039.36 00:29:47.670 } 00:29:47.670 ], 00:29:47.670 "core_count": 1 00:29:47.670 } 00:29:47.670 15:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:47.670 15:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:47.670 15:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:47.670 | .driver_specific 00:29:47.670 | .nvme_error 00:29:47.670 | .status_code 00:29:47.670 | .command_transient_transport_error' 00:29:47.670 15:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:47.930 15:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 234 > 0 )) 00:29:47.930 15:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 778711 00:29:47.930 15:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 778711 ']' 00:29:47.930 15:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 778711 00:29:47.930 15:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:47.931 15:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:47.931 15:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 778711 00:29:47.931 15:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:47.931 15:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:47.931 15:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 778711' 00:29:47.931 killing process with pid 778711 00:29:47.931 15:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 778711 00:29:47.931 Received shutdown signal, test time was about 2.000000 seconds 00:29:47.931 00:29:47.931 Latency(us) 00:29:47.931 [2024-11-20T14:39:36.891Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:47.931 [2024-11-20T14:39:36.891Z] =================================================================================================================== 00:29:47.931 [2024-11-20T14:39:36.891Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:47.931 15:39:36 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 778711 00:29:47.931 15:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:29:47.931 15:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:47.931 15:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:47.931 15:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:47.931 15:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:47.931 15:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=779462 00:29:47.931 15:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 779462 /var/tmp/bperf.sock 00:29:47.931 15:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 779462 ']' 00:29:47.931 15:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:29:47.931 15:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:47.931 15:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:47.931 15:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:47.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:47.931 15:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:47.931 15:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:47.931 [2024-11-20 15:39:36.882018] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
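With the read-path bdevperf torn down via killprocess above, the harness moves on to the write-path variant: run_bperf_err randwrite 4096 128 starts a fresh bdevperf in wait-for-RPC mode on its own Unix socket. A minimal sketch of that launch, using only the binary and flags visible in this trace (SPDK_ROOT is shorthand introduced here for the workspace path):

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # -m 2        : core mask 0x2, so the reactor runs on core 1
    # -r <sock>   : private RPC socket for this bdevperf instance
    # -w/-o/-q/-t : randwrite, 4096-byte I/O, queue depth 128, 2 s run
    # -z          : start idle and wait for a perform_tests RPC
    "$SPDK_ROOT/build/examples/bdevperf" \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!

    # Block until the app answers on its RPC socket, as the
    # waitforlisten call at host/digest.sh:60 does in the trace above.
    waitforlisten "$bperfpid" /var/tmp/bperf.sock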
00:29:47.931 [2024-11-20 15:39:36.882075] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779462 ] 00:29:48.192 [2024-11-20 15:39:36.966019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:48.192 [2024-11-20 15:39:36.994676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:48.763 15:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:48.763 15:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:48.763 15:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:48.763 15:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:49.025 15:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:49.025 15:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.025 15:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:49.025 15:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.025 15:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:49.025 15:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:49.285 nvme0n1 00:29:49.546 15:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:49.546 15:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.546 15:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:49.546 15:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.546 15:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:49.546 15:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:49.546 Running I/O for 2 seconds... 
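Before the job is released, the injection choreography is set up again: error statistics and unlimited retries on the initiator, data digest enabled on the connection, and corruption armed only after the controller is attached, presumably so the connect and identify traffic still passes digest verification. A condensed sketch of the RPC sequence traced above (rpc_cmd in the trace omits -s and therefore talks to the default application socket, which the tcp.c digest errors further below suggest is the NVMe-oF target):

    # Initiator (bdevperf): record NVMe error stats and retry failed I/O
    # forever, so injected digest errors are counted but never fail the job.
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1

    # Clear any stale crc32c corruption, then attach the controller with
    # TCP data digest (--ddgst) enabled.
    rpc.py accel_error_inject_error -o crc32c -t disable
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt the next 256 crc32c results, i.e. 256 bad data digests.
    rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

    # Release the queued job; this is the perform_tests RPC issued by
    # bdevperf.py -s /var/tmp/bperf.sock perform_tests in the trace.
    bdevperf.py -s /var/tmp/bperf.sock perform_tests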
00:29:49.546 [2024-11-20 15:39:38.355257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e6fa8 00:29:49.546 [2024-11-20 15:39:38.356367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.546 [2024-11-20 15:39:38.356395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:49.546 [2024-11-20 15:39:38.363942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e5ec8 00:29:49.546 [2024-11-20 15:39:38.365004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.546 [2024-11-20 15:39:38.365023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:49.546 [2024-11-20 15:39:38.372477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e4de8 00:29:49.546 [2024-11-20 15:39:38.373595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.546 [2024-11-20 15:39:38.373612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:49.546 [2024-11-20 15:39:38.381019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e0ea0 00:29:49.546 [2024-11-20 15:39:38.382129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.546 [2024-11-20 15:39:38.382146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:49.546 [2024-11-20 15:39:38.389533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166dfdc0 00:29:49.546 [2024-11-20 15:39:38.390635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:25169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.546 [2024-11-20 15:39:38.390651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:49.546 [2024-11-20 15:39:38.398074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e1710 00:29:49.546 [2024-11-20 15:39:38.399139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.546 [2024-11-20 15:39:38.399154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:49.546 [2024-11-20 15:39:38.406557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e27f0 00:29:49.546 [2024-11-20 15:39:38.407618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.546 [2024-11-20 15:39:38.407634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 
cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:49.546 [2024-11-20 15:39:38.415045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e38d0 00:29:49.546 [2024-11-20 15:39:38.416148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.546 [2024-11-20 15:39:38.416168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:49.546 [2024-11-20 15:39:38.423522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166ec840 00:29:49.546 [2024-11-20 15:39:38.424619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.546 [2024-11-20 15:39:38.424635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:49.546 [2024-11-20 15:39:38.432006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166ed920 00:29:49.546 [2024-11-20 15:39:38.433118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.546 [2024-11-20 15:39:38.433133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:49.546 [2024-11-20 15:39:38.440473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166eea00 00:29:49.546 [2024-11-20 15:39:38.441573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.546 [2024-11-20 15:39:38.441589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:49.546 [2024-11-20 15:39:38.448967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166efae0 00:29:49.546 [2024-11-20 15:39:38.450085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.546 [2024-11-20 15:39:38.450101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:49.546 [2024-11-20 15:39:38.457447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:49.546 [2024-11-20 15:39:38.458562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.546 [2024-11-20 15:39:38.458577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:49.546 [2024-11-20 15:39:38.465921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f1ca0 00:29:49.546 [2024-11-20 15:39:38.466987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.546 [2024-11-20 15:39:38.467003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:49.546 [2024-11-20 15:39:38.474389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f2d80 00:29:49.546 [2024-11-20 15:39:38.475485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.546 [2024-11-20 15:39:38.475501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:49.546 [2024-11-20 15:39:38.482853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f3e60 00:29:49.546 [2024-11-20 15:39:38.483950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.546 [2024-11-20 15:39:38.483969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:49.546 [2024-11-20 15:39:38.491320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e73e0 00:29:49.546 [2024-11-20 15:39:38.492394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.546 [2024-11-20 15:39:38.492410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:49.546 [2024-11-20 15:39:38.499800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e6300 00:29:49.546 [2024-11-20 15:39:38.500899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.546 [2024-11-20 15:39:38.500915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:49.808 [2024-11-20 15:39:38.508275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e5220 00:29:49.808 [2024-11-20 15:39:38.509372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.808 [2024-11-20 15:39:38.509388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:49.808 [2024-11-20 15:39:38.516731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e4140 00:29:49.808 [2024-11-20 15:39:38.517826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.808 [2024-11-20 15:39:38.517841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:49.808 [2024-11-20 15:39:38.525212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e01f8 00:29:49.808 [2024-11-20 15:39:38.526300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.808 [2024-11-20 15:39:38.526316] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:49.808 [2024-11-20 15:39:38.533850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e12d8 00:29:49.808 [2024-11-20 15:39:38.534950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.808 [2024-11-20 15:39:38.534966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:49.808 [2024-11-20 15:39:38.542331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e23b8 00:29:49.808 [2024-11-20 15:39:38.543431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.808 [2024-11-20 15:39:38.543446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:49.808 [2024-11-20 15:39:38.550811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e3498 00:29:49.808 [2024-11-20 15:39:38.551931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.808 [2024-11-20 15:39:38.551946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:49.808 [2024-11-20 15:39:38.559263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166ec408 00:29:49.808 [2024-11-20 15:39:38.560343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.808 [2024-11-20 15:39:38.560359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:49.808 [2024-11-20 15:39:38.567707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166ed4e8 00:29:49.808 [2024-11-20 15:39:38.568807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.808 [2024-11-20 15:39:38.568823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:49.808 [2024-11-20 15:39:38.576154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166ee5c8 00:29:49.808 [2024-11-20 15:39:38.577223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.808 [2024-11-20 15:39:38.577239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:49.808 [2024-11-20 15:39:38.584624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166ef6a8 00:29:49.808 [2024-11-20 15:39:38.585731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.808 [2024-11-20 
15:39:38.585747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:49.808 [2024-11-20 15:39:38.593090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0788 00:29:49.808 [2024-11-20 15:39:38.594184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.808 [2024-11-20 15:39:38.594201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:49.808 [2024-11-20 15:39:38.601556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f1868 00:29:49.808 [2024-11-20 15:39:38.602623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.808 [2024-11-20 15:39:38.602639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:49.808 [2024-11-20 15:39:38.609999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f2948 00:29:49.808 [2024-11-20 15:39:38.611092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.808 [2024-11-20 15:39:38.611108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:49.808 [2024-11-20 15:39:38.618454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f3a28 00:29:49.808 [2024-11-20 15:39:38.619546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.808 [2024-11-20 15:39:38.619563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:49.808 [2024-11-20 15:39:38.626914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f4b08 00:29:49.808 [2024-11-20 15:39:38.628006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.808 [2024-11-20 15:39:38.628023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:49.808 [2024-11-20 15:39:38.635392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e6fa8 00:29:49.808 [2024-11-20 15:39:38.636487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.808 [2024-11-20 15:39:38.636503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:49.808 [2024-11-20 15:39:38.643871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e5ec8 00:29:49.808 [2024-11-20 15:39:38.644985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
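Note the change of vantage point relative to the read-path storm earlier: there the mismatch was caught on the initiator (nvme_tcp.c:nvme_tcp_accel_seq_recv_compute_crc32_done), while here each WRITE is verified on the target (tcp.c:2233:data_crc32_calc_done), which is where the injected crc32c corruption takes effect, and the failure travels back to the host as the same 00/22 transient status. When eyeballing a capture like this, the two sides can be tallied directly; a throwaway sketch (bperf.log is a hypothetical capture file, both patterns are verbatim from the lines above):

    # Target-side digest verification failures ...
    grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' bperf.log
    # ... and the matching host-visible transient completions; with
    # "-t corrupt -i 256" armed, both counts track the injected budget.
    grep -c 'TRANSIENT TRANSPORT ERROR (00/22)' bperf.log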
00:29:49.808 [2024-11-20 15:39:38.645000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:49.808 [2024-11-20 15:39:38.652368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e4de8 00:29:49.808 [2024-11-20 15:39:38.653480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.808 [2024-11-20 15:39:38.653496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:49.808 [2024-11-20 15:39:38.660814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e0ea0 00:29:49.808 [2024-11-20 15:39:38.661912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.808 [2024-11-20 15:39:38.661929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:49.808 [2024-11-20 15:39:38.669273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166dfdc0 00:29:49.808 [2024-11-20 15:39:38.670382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.808 [2024-11-20 15:39:38.670397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:49.808 [2024-11-20 15:39:38.677755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e1710 00:29:49.808 [2024-11-20 15:39:38.678869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.808 [2024-11-20 15:39:38.678885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:49.808 [2024-11-20 15:39:38.685663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e1710 00:29:49.808 [2024-11-20 15:39:38.686756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.808 [2024-11-20 15:39:38.686771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:49.809 [2024-11-20 15:39:38.693543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166de470 00:29:49.809 [2024-11-20 15:39:38.694303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.809 [2024-11-20 15:39:38.694319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:49.809 [2024-11-20 15:39:38.701915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166fe2e8 00:29:49.809 [2024-11-20 15:39:38.702676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13311 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:49.809 [2024-11-20 15:39:38.702694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:49.809 [2024-11-20 15:39:38.710378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166fe720 00:29:49.809 [2024-11-20 15:39:38.711122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.809 [2024-11-20 15:39:38.711137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:49.809 [2024-11-20 15:39:38.718833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166fd208 00:29:49.809 [2024-11-20 15:39:38.719598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.809 [2024-11-20 15:39:38.719614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:49.809 [2024-11-20 15:39:38.727311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166eb328 00:29:49.809 [2024-11-20 15:39:38.728049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.809 [2024-11-20 15:39:38.728065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:49.809 [2024-11-20 15:39:38.735781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166ea248 00:29:49.809 [2024-11-20 15:39:38.736537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.809 [2024-11-20 15:39:38.736553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:49.809 [2024-11-20 15:39:38.744224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e9168 00:29:49.809 [2024-11-20 15:39:38.744963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.809 [2024-11-20 15:39:38.744979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:49.809 [2024-11-20 15:39:38.752678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e8088 00:29:49.809 [2024-11-20 15:39:38.753424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.809 [2024-11-20 15:39:38.753439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:49.809 [2024-11-20 15:39:38.761114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e1f80 00:29:49.809 [2024-11-20 15:39:38.761872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 
lba:4628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:49.809 [2024-11-20 15:39:38.761887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.070 [2024-11-20 15:39:38.769591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e3060 00:29:50.070 [2024-11-20 15:39:38.770345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.070 [2024-11-20 15:39:38.770360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.070 [2024-11-20 15:39:38.778062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166ebfd0 00:29:50.070 [2024-11-20 15:39:38.778816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:11157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.070 [2024-11-20 15:39:38.778832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.070 [2024-11-20 15:39:38.786529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166ed0b0 00:29:50.070 [2024-11-20 15:39:38.787242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.070 [2024-11-20 15:39:38.787258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.070 [2024-11-20 15:39:38.794985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166ee190 00:29:50.070 [2024-11-20 15:39:38.795732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.070 [2024-11-20 15:39:38.795748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.070 [2024-11-20 15:39:38.803432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f96f8 00:29:50.070 [2024-11-20 15:39:38.804183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.070 [2024-11-20 15:39:38.804198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.070 [2024-11-20 15:39:38.811895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166fa7d8 00:29:50.070 [2024-11-20 15:39:38.812637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.070 [2024-11-20 15:39:38.812652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.070 [2024-11-20 15:39:38.820354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166fb8b8 00:29:50.070 [2024-11-20 15:39:38.821067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:69 nsid:1 lba:6308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.070 [2024-11-20 15:39:38.821083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.070 [2024-11-20 15:39:38.828816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166de038 00:29:50.070 [2024-11-20 15:39:38.829519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.070 [2024-11-20 15:39:38.829535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.070 [2024-11-20 15:39:38.837268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166df118 00:29:50.070 [2024-11-20 15:39:38.838007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.070 [2024-11-20 15:39:38.838023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.070 [2024-11-20 15:39:38.845720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166fef90 00:29:50.070 [2024-11-20 15:39:38.846473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.070 [2024-11-20 15:39:38.846489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.070 [2024-11-20 15:39:38.854157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166fd640 00:29:50.070 [2024-11-20 15:39:38.854893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.070 [2024-11-20 15:39:38.854909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.070 [2024-11-20 15:39:38.862623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166fc560 00:29:50.070 [2024-11-20 15:39:38.863380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.070 [2024-11-20 15:39:38.863396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.070 [2024-11-20 15:39:38.871086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166ea680 00:29:50.070 [2024-11-20 15:39:38.871848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.070 [2024-11-20 15:39:38.871864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.070 [2024-11-20 15:39:38.879573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e95a0 00:29:50.070 [2024-11-20 15:39:38.880319] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:21755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.070 [2024-11-20 15:39:38.880334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.070 [2024-11-20 15:39:38.888018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e84c0 00:29:50.070 [2024-11-20 15:39:38.888766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.070 [2024-11-20 15:39:38.888782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.070 [2024-11-20 15:39:38.896479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e1b48 00:29:50.070 [2024-11-20 15:39:38.897206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.070 [2024-11-20 15:39:38.897222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.070 [2024-11-20 15:39:38.904921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e2c28 00:29:50.070 [2024-11-20 15:39:38.905745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.070 [2024-11-20 15:39:38.905760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.070 [2024-11-20 15:39:38.913467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e3d08 00:29:50.070 [2024-11-20 15:39:38.914184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.070 [2024-11-20 15:39:38.914199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.070 [2024-11-20 15:39:38.921951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166ecc78 00:29:50.070 [2024-11-20 15:39:38.922661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.070 [2024-11-20 15:39:38.922680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.070 [2024-11-20 15:39:38.930406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166edd58 00:29:50.070 [2024-11-20 15:39:38.931168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:24527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.070 [2024-11-20 15:39:38.931184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.070 [2024-11-20 15:39:38.938867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f8a50 00:29:50.070 [2024-11-20 15:39:38.939611] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.070 [2024-11-20 15:39:38.939626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.070 [2024-11-20 15:39:38.947329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f9b30 00:29:50.071 [2024-11-20 15:39:38.948070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.071 [2024-11-20 15:39:38.948085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.071 [2024-11-20 15:39:38.955795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166fac10 00:29:50.071 [2024-11-20 15:39:38.956544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.071 [2024-11-20 15:39:38.956559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.071 [2024-11-20 15:39:38.964253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166fbcf0 00:29:50.071 [2024-11-20 15:39:38.965008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.071 [2024-11-20 15:39:38.965024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.071 [2024-11-20 15:39:38.972714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166de470 00:29:50.071 [2024-11-20 15:39:38.973480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.071 [2024-11-20 15:39:38.973495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.071 [2024-11-20 15:39:38.981182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166fe2e8 00:29:50.071 [2024-11-20 15:39:38.981920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:24766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.071 [2024-11-20 15:39:38.981935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.071 [2024-11-20 15:39:38.989628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166fe720 00:29:50.071 [2024-11-20 15:39:38.990381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.071 [2024-11-20 15:39:38.990396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.071 [2024-11-20 15:39:38.998081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166fd208 00:29:50.071 [2024-11-20 
15:39:38.998836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.071 [2024-11-20 15:39:38.998852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.071 [2024-11-20 15:39:39.006559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166eb328 00:29:50.071 [2024-11-20 15:39:39.007295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.071 [2024-11-20 15:39:39.007311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.071 [2024-11-20 15:39:39.015018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166ea248 00:29:50.071 [2024-11-20 15:39:39.015756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.071 [2024-11-20 15:39:39.015772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.071 [2024-11-20 15:39:39.023470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e9168 00:29:50.071 [2024-11-20 15:39:39.024204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.071 [2024-11-20 15:39:39.024220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.333 [2024-11-20 15:39:39.031945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e8088 00:29:50.333 [2024-11-20 15:39:39.032705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:18519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.333 [2024-11-20 15:39:39.032720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.333 [2024-11-20 15:39:39.040413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e1f80 00:29:50.333 [2024-11-20 15:39:39.041152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.333 [2024-11-20 15:39:39.041170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.333 [2024-11-20 15:39:39.048879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e3060 00:29:50.333 [2024-11-20 15:39:39.049624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.333 [2024-11-20 15:39:39.049639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.333 [2024-11-20 15:39:39.057348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166ebfd0 
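Note on the records above: the repeated data_crc32_calc_done *ERROR* lines are the TCP transport's digest verification firing. When data digest is negotiated on an NVMe/TCP connection, every data-bearing PDU carries a trailing DDGST field, a CRC32C (Castagnoli) computed over the PDU payload; the receiver recomputes it and rejects the PDU on a mismatch, which is exactly what each "Data digest error on tqpair=..." record reports. A minimal sketch of that check in Python, assuming only the standard CRC32C parameters NVMe/TCP uses (the payload and wire digest below are made-up stand-ins, not values from this run):

def crc32c(data: bytes) -> int:
    # Bitwise CRC32C: reflected polynomial 0x82F63B78, init and final XOR 0xFFFFFFFF.
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard CRC32C check value.
assert crc32c(b"123456789") == 0xE3069283

payload = bytes(4096)   # hypothetical stand-in for one 0x1000-byte WRITE payload
ddgst_on_wire = 0x0     # hypothetical stand-in for the digest carried in the PDU
if crc32c(payload) != ddgst_on_wire:
    print("Data digest error")   # the condition each *ERROR* record above reports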
00:29:50.333 [2024-11-20 15:39:39.058105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:10493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.333 [2024-11-20 15:39:39.058120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.333 [2024-11-20 15:39:39.065816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166ed0b0 00:29:50.333 [2024-11-20 15:39:39.066570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.333 [2024-11-20 15:39:39.066585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.333 [2024-11-20 15:39:39.074287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166ee190 00:29:50.333 [2024-11-20 15:39:39.075049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.333 [2024-11-20 15:39:39.075064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.333 [2024-11-20 15:39:39.082735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f96f8 00:29:50.333 [2024-11-20 15:39:39.083476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.333 [2024-11-20 15:39:39.083492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.333 [2024-11-20 15:39:39.091195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166fa7d8 00:29:50.333 [2024-11-20 15:39:39.091928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.333 [2024-11-20 15:39:39.091944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.333 [2024-11-20 15:39:39.099664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166fb8b8 00:29:50.333 [2024-11-20 15:39:39.100373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.333 [2024-11-20 15:39:39.100389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.333 [2024-11-20 15:39:39.108130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166de038 00:29:50.333 [2024-11-20 15:39:39.108892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.333 [2024-11-20 15:39:39.108907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.333 [2024-11-20 15:39:39.116586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with 
pdu=0x2000166df118 00:29:50.334 [2024-11-20 15:39:39.117341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.334 [2024-11-20 15:39:39.117356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.334 [2024-11-20 15:39:39.125036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166fef90 00:29:50.334 [2024-11-20 15:39:39.125783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.334 [2024-11-20 15:39:39.125799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.334 [2024-11-20 15:39:39.133505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166fd640 00:29:50.334 [2024-11-20 15:39:39.134228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.334 [2024-11-20 15:39:39.134244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.334 [2024-11-20 15:39:39.141984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166fc560 00:29:50.334 [2024-11-20 15:39:39.142702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.334 [2024-11-20 15:39:39.142720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.334 [2024-11-20 15:39:39.150459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166ea680 00:29:50.334 [2024-11-20 15:39:39.151198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.334 [2024-11-20 15:39:39.151213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.334 [2024-11-20 15:39:39.158909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e95a0 00:29:50.334 [2024-11-20 15:39:39.159617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.334 [2024-11-20 15:39:39.159632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.334 [2024-11-20 15:39:39.167378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e84c0 00:29:50.334 [2024-11-20 15:39:39.168132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.334 [2024-11-20 15:39:39.168147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.334 [2024-11-20 15:39:39.175828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x138f3d0) with pdu=0x2000166e1b48 00:29:50.334 [2024-11-20 15:39:39.176568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.334 [2024-11-20 15:39:39.176584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.334 [2024-11-20 15:39:39.184307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e2c28 00:29:50.334 [2024-11-20 15:39:39.185038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:6388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.334 [2024-11-20 15:39:39.185054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.334 [2024-11-20 15:39:39.192770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e3d08 00:29:50.334 [2024-11-20 15:39:39.193528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.334 [2024-11-20 15:39:39.193544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.334 [2024-11-20 15:39:39.201228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166ecc78 00:29:50.334 [2024-11-20 15:39:39.201965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.334 [2024-11-20 15:39:39.201980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.334 [2024-11-20 15:39:39.209683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166edd58 00:29:50.334 [2024-11-20 15:39:39.210406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.334 [2024-11-20 15:39:39.210421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.334 [2024-11-20 15:39:39.218144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f8a50 00:29:50.334 [2024-11-20 15:39:39.218860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.334 [2024-11-20 15:39:39.218876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.334 [2024-11-20 15:39:39.226620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f9b30 00:29:50.334 [2024-11-20 15:39:39.227360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.334 [2024-11-20 15:39:39.227377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.334 [2024-11-20 15:39:39.235096] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166fac10 00:29:50.334 [2024-11-20 15:39:39.235856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.334 [2024-11-20 15:39:39.235872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.334 [2024-11-20 15:39:39.243562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166fbcf0 00:29:50.334 [2024-11-20 15:39:39.244315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.334 [2024-11-20 15:39:39.244331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.334 [2024-11-20 15:39:39.252037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166de470 00:29:50.334 [2024-11-20 15:39:39.252787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:15663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.334 [2024-11-20 15:39:39.252802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.334 [2024-11-20 15:39:39.260507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166fe2e8 00:29:50.334 [2024-11-20 15:39:39.261235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.334 [2024-11-20 15:39:39.261250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.334 [2024-11-20 15:39:39.268954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166fe720 00:29:50.334 [2024-11-20 15:39:39.269692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.334 [2024-11-20 15:39:39.269707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.334 [2024-11-20 15:39:39.277513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166fd208 00:29:50.335 [2024-11-20 15:39:39.278220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.335 [2024-11-20 15:39:39.278236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.335 [2024-11-20 15:39:39.285977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166eb328 00:29:50.335 [2024-11-20 15:39:39.286740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.335 [2024-11-20 15:39:39.286755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:50.596 [2024-11-20 15:39:39.294446] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166ea248
00:29:50.596 [2024-11-20 15:39:39.295191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:25130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:50.596 [2024-11-20 15:39:39.295207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:29:50.596 [2024-11-20 15:39:39.302903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e9168
00:29:50.596 [2024-11-20 15:39:39.303645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:50.596 [2024-11-20 15:39:39.303661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:29:50.596 [2024-11-20 15:39:39.311374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e8088
00:29:50.596 [2024-11-20 15:39:39.312123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:50.596 [2024-11-20 15:39:39.312138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:29:50.596 [2024-11-20 15:39:39.319838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e1f80
00:29:50.596 [2024-11-20 15:39:39.320582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:50.596 [2024-11-20 15:39:39.320597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:29:50.596 [2024-11-20 15:39:39.328312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e3060
00:29:50.596 [2024-11-20 15:39:39.329063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:10555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:50.596 [2024-11-20 15:39:39.329078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:29:50.596 [2024-11-20 15:39:39.336789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166ebfd0
00:29:50.596 [2024-11-20 15:39:39.337534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:50.596 [2024-11-20 15:39:39.337549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:29:50.596 29843.00 IOPS, 116.57 MiB/s [2024-11-20T14:39:39.556Z] [2024-11-20 15:39:39.344813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f2d80
00:29:50.596 [2024-11-20 15:39:39.345550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:50.596 [2024-11-20 15:39:39.345566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:29:50.596 [2024-11-20 15:39:39.353567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166e7818
00:29:50.596 [2024-11-20 15:39:39.354262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:59 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:50.596 [2024-11-20 15:39:39.354278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:29:50.596 [2024-11-20 15:39:39.362792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0
00:29:50.596 [2024-11-20 15:39:39.363045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:50.596 [2024-11-20 15:39:39.363063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:50.596 [2024-11-20 15:39:39.371516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0
00:29:50.596 [2024-11-20 15:39:39.371768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:50.596 [2024-11-20 15:39:39.371783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:50.596 [2024-11-20 15:39:39.380256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0
00:29:50.596 [2024-11-20 15:39:39.380498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:50.596 [2024-11-20 15:39:39.380512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:50.596 [2024-11-20 15:39:39.389027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0
00:29:50.596 [2024-11-20 15:39:39.389262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:50.596 [2024-11-20 15:39:39.389278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:50.596 [2024-11-20 15:39:39.397719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0
00:29:50.596 [2024-11-20 15:39:39.397956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:50.596 [2024-11-20 15:39:39.397971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:50.596 [2024-11-20 15:39:39.406462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0
00:29:50.596 [2024-11-20 15:39:39.406691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:50.596 [2024-11-20 15:39:39.406706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT
ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.596 [2024-11-20 15:39:39.415211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.596 [2024-11-20 15:39:39.415530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:18820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.596 [2024-11-20 15:39:39.415545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.596 [2024-11-20 15:39:39.423916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.596 [2024-11-20 15:39:39.424162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.596 [2024-11-20 15:39:39.424177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.596 [2024-11-20 15:39:39.432636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.596 [2024-11-20 15:39:39.432770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.596 [2024-11-20 15:39:39.432784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.596 [2024-11-20 15:39:39.441312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.596 [2024-11-20 15:39:39.441589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.596 [2024-11-20 15:39:39.441603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.596 [2024-11-20 15:39:39.450057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.596 [2024-11-20 15:39:39.450333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.596 [2024-11-20 15:39:39.450349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.596 [2024-11-20 15:39:39.458760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.596 [2024-11-20 15:39:39.459014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.596 [2024-11-20 15:39:39.459029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.596 [2024-11-20 15:39:39.467517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.597 [2024-11-20 15:39:39.467754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:10373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.597 [2024-11-20 15:39:39.467769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.597 [2024-11-20 15:39:39.476297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.597 [2024-11-20 15:39:39.476594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.597 [2024-11-20 15:39:39.476609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.597 [2024-11-20 15:39:39.484998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.597 [2024-11-20 15:39:39.485260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.597 [2024-11-20 15:39:39.485275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.597 [2024-11-20 15:39:39.493671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.597 [2024-11-20 15:39:39.493971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:14476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.597 [2024-11-20 15:39:39.493987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.597 [2024-11-20 15:39:39.502392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.597 [2024-11-20 15:39:39.502694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.597 [2024-11-20 15:39:39.502710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.597 [2024-11-20 15:39:39.511132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.597 [2024-11-20 15:39:39.511428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.597 [2024-11-20 15:39:39.511444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.597 [2024-11-20 15:39:39.519889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.597 [2024-11-20 15:39:39.520232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.597 [2024-11-20 15:39:39.520248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.597 [2024-11-20 15:39:39.528634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.597 [2024-11-20 15:39:39.528893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.597 [2024-11-20 15:39:39.528908] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.597 [2024-11-20 15:39:39.537501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.597 [2024-11-20 15:39:39.537623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.597 [2024-11-20 15:39:39.537638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.597 [2024-11-20 15:39:39.546245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.597 [2024-11-20 15:39:39.546494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.597 [2024-11-20 15:39:39.546509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.858 [2024-11-20 15:39:39.555001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.858 [2024-11-20 15:39:39.555268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:25259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.858 [2024-11-20 15:39:39.555283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.858 [2024-11-20 15:39:39.563788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.858 [2024-11-20 15:39:39.564095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.858 [2024-11-20 15:39:39.564111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.858 [2024-11-20 15:39:39.572472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.858 [2024-11-20 15:39:39.572729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.858 [2024-11-20 15:39:39.572744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.858 [2024-11-20 15:39:39.581231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.858 [2024-11-20 15:39:39.581513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.858 [2024-11-20 15:39:39.581528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.858 [2024-11-20 15:39:39.589925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.858 [2024-11-20 15:39:39.590197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.858 [2024-11-20 
15:39:39.590216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.858 [2024-11-20 15:39:39.598635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.858 [2024-11-20 15:39:39.598882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.858 [2024-11-20 15:39:39.598897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.858 [2024-11-20 15:39:39.607386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.858 [2024-11-20 15:39:39.607712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.858 [2024-11-20 15:39:39.607728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.858 [2024-11-20 15:39:39.616092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.858 [2024-11-20 15:39:39.616344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.858 [2024-11-20 15:39:39.616360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.858 [2024-11-20 15:39:39.624814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.858 [2024-11-20 15:39:39.625084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.858 [2024-11-20 15:39:39.625099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.858 [2024-11-20 15:39:39.633512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.858 [2024-11-20 15:39:39.633752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.858 [2024-11-20 15:39:39.633767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.858 [2024-11-20 15:39:39.642230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.858 [2024-11-20 15:39:39.642527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.858 [2024-11-20 15:39:39.642543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.858 [2024-11-20 15:39:39.650976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.858 [2024-11-20 15:39:39.651246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
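Each digest failure above is surfaced to the I/O as a completion whose status spdk_nvme_print_completion shows as (00/22), i.e. (SCT/SC) in hex: Status Code Type 0x0 (generic command status) and Status Code 0x22 (Transient Transport Error), with dnr:0 meaning the do-not-retry bit is clear, so the command may be retried. A small decoder for these records, assuming only the log format visible here (the regex is illustrative, not an SPDK interface):

import re

LINE = ("COMMAND TRANSIENT TRANSPORT ERROR (00/22) "
        "qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0")

SCT = {0x0: "generic", 0x1: "command-specific",
       0x2: "media/data-integrity", 0x3: "path-related"}

m = re.search(r"\((?P<sct>[0-9a-fA-F]{2})/(?P<sc>[0-9a-fA-F]{2})\).*dnr:(?P<dnr>\d)", LINE)
if m:
    sct = int(m.group("sct"), 16)
    sc = int(m.group("sc"), 16)
    dnr = int(m.group("dnr"))
    print(f"sct=0x{sct:02x} ({SCT.get(sct, 'other/vendor')}), sc=0x{sc:02x}, "
          f"{'do-not-retry' if dnr else 'retryable'}")
    # -> sct=0x00 (generic), sc=0x22, retryable

As a consistency check, the interleaved progress tick earlier in this run (29843.00 IOPS, 116.57 MiB/s) matches the len:0x1000 writes: 29843 x 4096 bytes/s is about 122.2 MB/s, which is 116.57 MiB/s.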
00:29:50.858 [2024-11-20 15:39:39.651261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.858 [2024-11-20 15:39:39.659689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.858 [2024-11-20 15:39:39.660027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.858 [2024-11-20 15:39:39.660042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.858 [2024-11-20 15:39:39.668448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.858 [2024-11-20 15:39:39.668726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.858 [2024-11-20 15:39:39.668742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.859 [2024-11-20 15:39:39.677304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.859 [2024-11-20 15:39:39.677585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.859 [2024-11-20 15:39:39.677601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.859 [2024-11-20 15:39:39.686054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.859 [2024-11-20 15:39:39.686308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.859 [2024-11-20 15:39:39.686330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.859 [2024-11-20 15:39:39.694817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.859 [2024-11-20 15:39:39.695082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.859 [2024-11-20 15:39:39.695098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.859 [2024-11-20 15:39:39.703573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.859 [2024-11-20 15:39:39.703834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.859 [2024-11-20 15:39:39.703848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.859 [2024-11-20 15:39:39.712317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.859 [2024-11-20 15:39:39.712507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1212 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:50.859 [2024-11-20 15:39:39.712522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.859 [2024-11-20 15:39:39.721075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.859 [2024-11-20 15:39:39.721360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.859 [2024-11-20 15:39:39.721376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.859 [2024-11-20 15:39:39.729805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.859 [2024-11-20 15:39:39.730049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.859 [2024-11-20 15:39:39.730073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.859 [2024-11-20 15:39:39.738605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.859 [2024-11-20 15:39:39.738834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.859 [2024-11-20 15:39:39.738849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.859 [2024-11-20 15:39:39.747338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.859 [2024-11-20 15:39:39.747559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.859 [2024-11-20 15:39:39.747574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.859 [2024-11-20 15:39:39.756099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.859 [2024-11-20 15:39:39.756379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.859 [2024-11-20 15:39:39.756395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.859 [2024-11-20 15:39:39.764854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.859 [2024-11-20 15:39:39.764969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.859 [2024-11-20 15:39:39.764984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.859 [2024-11-20 15:39:39.773618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.859 [2024-11-20 15:39:39.773888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24715 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.859 [2024-11-20 15:39:39.773904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.859 [2024-11-20 15:39:39.782336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.859 [2024-11-20 15:39:39.782558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.859 [2024-11-20 15:39:39.782573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.859 [2024-11-20 15:39:39.791121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.859 [2024-11-20 15:39:39.791421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.859 [2024-11-20 15:39:39.791437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.859 [2024-11-20 15:39:39.799873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.859 [2024-11-20 15:39:39.800120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.859 [2024-11-20 15:39:39.800135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:50.859 [2024-11-20 15:39:39.808620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:50.859 [2024-11-20 15:39:39.808879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:11180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.859 [2024-11-20 15:39:39.808895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.120 [2024-11-20 15:39:39.817327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.120 [2024-11-20 15:39:39.817586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.120 [2024-11-20 15:39:39.817604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.120 [2024-11-20 15:39:39.826056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.120 [2024-11-20 15:39:39.826318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.120 [2024-11-20 15:39:39.826333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.120 [2024-11-20 15:39:39.834760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.120 [2024-11-20 15:39:39.835011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:26 nsid:1 lba:1720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.120 [2024-11-20 15:39:39.835026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.120 [2024-11-20 15:39:39.843507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.120 [2024-11-20 15:39:39.843759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.120 [2024-11-20 15:39:39.843774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.120 [2024-11-20 15:39:39.852263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.120 [2024-11-20 15:39:39.852536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.120 [2024-11-20 15:39:39.852553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.120 [2024-11-20 15:39:39.861028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.120 [2024-11-20 15:39:39.861296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.120 [2024-11-20 15:39:39.861318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.120 [2024-11-20 15:39:39.869768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.120 [2024-11-20 15:39:39.869988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.120 [2024-11-20 15:39:39.870003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.120 [2024-11-20 15:39:39.878495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.120 [2024-11-20 15:39:39.878784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.120 [2024-11-20 15:39:39.878798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.120 [2024-11-20 15:39:39.887245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.120 [2024-11-20 15:39:39.887518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.120 [2024-11-20 15:39:39.887539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.120 [2024-11-20 15:39:39.896097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.120 [2024-11-20 15:39:39.896350] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.121 [2024-11-20 15:39:39.896366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.121 [2024-11-20 15:39:39.904874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.121 [2024-11-20 15:39:39.905248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.121 [2024-11-20 15:39:39.905264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.121 [2024-11-20 15:39:39.913597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.121 [2024-11-20 15:39:39.913892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.121 [2024-11-20 15:39:39.913908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.121 [2024-11-20 15:39:39.922370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.121 [2024-11-20 15:39:39.922632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.121 [2024-11-20 15:39:39.922647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.121 [2024-11-20 15:39:39.931146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.121 [2024-11-20 15:39:39.931390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.121 [2024-11-20 15:39:39.931405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.121 [2024-11-20 15:39:39.939883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.121 [2024-11-20 15:39:39.940180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:1984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.121 [2024-11-20 15:39:39.940196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.121 [2024-11-20 15:39:39.948663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.121 [2024-11-20 15:39:39.948910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.121 [2024-11-20 15:39:39.948925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.121 [2024-11-20 15:39:39.957418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.121 [2024-11-20 15:39:39.957659] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.121 [2024-11-20 15:39:39.957673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.121 [2024-11-20 15:39:39.966138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.121 [2024-11-20 15:39:39.966395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.121 [2024-11-20 15:39:39.966411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.121 [2024-11-20 15:39:39.974893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.121 [2024-11-20 15:39:39.975107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.121 [2024-11-20 15:39:39.975122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.121 [2024-11-20 15:39:39.983660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.121 [2024-11-20 15:39:39.983798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.121 [2024-11-20 15:39:39.983812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.121 [2024-11-20 15:39:39.992398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.121 [2024-11-20 15:39:39.992677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.121 [2024-11-20 15:39:39.992693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.121 [2024-11-20 15:39:40.001125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.121 [2024-11-20 15:39:40.001382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:10172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.121 [2024-11-20 15:39:40.001397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.121 [2024-11-20 15:39:40.010354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.121 [2024-11-20 15:39:40.010605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.121 [2024-11-20 15:39:40.010622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.121 [2024-11-20 15:39:40.019187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.121 [2024-11-20 
15:39:40.019451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.121 [2024-11-20 15:39:40.019466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.121 [2024-11-20 15:39:40.027922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.121 [2024-11-20 15:39:40.028186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.121 [2024-11-20 15:39:40.028202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.121 [2024-11-20 15:39:40.036660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.121 [2024-11-20 15:39:40.036924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.121 [2024-11-20 15:39:40.036945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.121 [2024-11-20 15:39:40.045437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.121 [2024-11-20 15:39:40.045705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.121 [2024-11-20 15:39:40.045723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.121 [2024-11-20 15:39:40.054155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.121 [2024-11-20 15:39:40.054451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.121 [2024-11-20 15:39:40.054468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.121 [2024-11-20 15:39:40.062892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.121 [2024-11-20 15:39:40.063009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.121 [2024-11-20 15:39:40.063024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.121 [2024-11-20 15:39:40.071622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.121 [2024-11-20 15:39:40.071892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.121 [2024-11-20 15:39:40.071907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.382 [2024-11-20 15:39:40.080389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 
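Note: the flood of data_crc32_calc_done / COMMAND TRANSIENT TRANSPORT ERROR (00/22) records here is the test working as intended. Data digest (a CRC32C over each NVMe/TCP data PDU) is enabled on the controller, and the harness corrupts the host-side crc32c results, so the affected WRITEs fail the digest check and complete with a transient transport error. A minimal sketch of that arrangement, built from the attach and injection RPCs that appear verbatim later in this trace; pointing the injection at the bperf socket is an assumption about what the harness's rpc_cmd wrapper resolves to:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Attach the controller with data digest enabled (--ddgst); every data PDU
  # now carries a CRC32C that the receiving side verifies.
  $rpc_py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Make the accel layer hand back corrupted crc32c results (presumably every
  # 32nd operation, given -i 32), so outgoing digests are wrong by design.
  $rpc_py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 32

With --nvme-error-stat configured on the bdev_nvme layer (traced further down), each such completion is also counted per status code, which is what the pass/fail check reads back after the run.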
00:29:51.382 [2024-11-20 15:39:40.080655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:25149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.382 [2024-11-20 15:39:40.080670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.382 [2024-11-20 15:39:40.089132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.382 [2024-11-20 15:39:40.089397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.382 [2024-11-20 15:39:40.089414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.382 [2024-11-20 15:39:40.097871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.382 [2024-11-20 15:39:40.098134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.382 [2024-11-20 15:39:40.098150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.382 [2024-11-20 15:39:40.106632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.382 [2024-11-20 15:39:40.106893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:15756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.382 [2024-11-20 15:39:40.106909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.382 [2024-11-20 15:39:40.115364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.382 [2024-11-20 15:39:40.115627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.382 [2024-11-20 15:39:40.115643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.382 [2024-11-20 15:39:40.124132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.382 [2024-11-20 15:39:40.124420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.382 [2024-11-20 15:39:40.124438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.382 [2024-11-20 15:39:40.132889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.382 [2024-11-20 15:39:40.133168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.382 [2024-11-20 15:39:40.133183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.382 [2024-11-20 15:39:40.141626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) 
with pdu=0x2000166f0bc0 00:29:51.382 [2024-11-20 15:39:40.141886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.382 [2024-11-20 15:39:40.141902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.382 [2024-11-20 15:39:40.150371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.382 [2024-11-20 15:39:40.150599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.382 [2024-11-20 15:39:40.150615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.382 [2024-11-20 15:39:40.159093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.382 [2024-11-20 15:39:40.159257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.383 [2024-11-20 15:39:40.159272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.383 [2024-11-20 15:39:40.167822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.383 [2024-11-20 15:39:40.168071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.383 [2024-11-20 15:39:40.168085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.383 [2024-11-20 15:39:40.176548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.383 [2024-11-20 15:39:40.176807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.383 [2024-11-20 15:39:40.176823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.383 [2024-11-20 15:39:40.185288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.383 [2024-11-20 15:39:40.185531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.383 [2024-11-20 15:39:40.185547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.383 [2024-11-20 15:39:40.194011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.383 [2024-11-20 15:39:40.194304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.383 [2024-11-20 15:39:40.194321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.383 [2024-11-20 15:39:40.202710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.383 [2024-11-20 15:39:40.202978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.383 [2024-11-20 15:39:40.202995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.383 [2024-11-20 15:39:40.211449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.383 [2024-11-20 15:39:40.211674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.383 [2024-11-20 15:39:40.211689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.383 [2024-11-20 15:39:40.220189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.383 [2024-11-20 15:39:40.220478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.383 [2024-11-20 15:39:40.220494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.383 [2024-11-20 15:39:40.228938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.383 [2024-11-20 15:39:40.229201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.383 [2024-11-20 15:39:40.229217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.383 [2024-11-20 15:39:40.237644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.383 [2024-11-20 15:39:40.237899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.383 [2024-11-20 15:39:40.237921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.383 [2024-11-20 15:39:40.246369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.383 [2024-11-20 15:39:40.246521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.383 [2024-11-20 15:39:40.246537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.383 [2024-11-20 15:39:40.255127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.383 [2024-11-20 15:39:40.255377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.383 [2024-11-20 15:39:40.255392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.383 [2024-11-20 15:39:40.263887] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.383 [2024-11-20 15:39:40.264114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.383 [2024-11-20 15:39:40.264128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.383 [2024-11-20 15:39:40.272659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.383 [2024-11-20 15:39:40.272871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:8097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.383 [2024-11-20 15:39:40.272886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.383 [2024-11-20 15:39:40.281528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.383 [2024-11-20 15:39:40.281788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.383 [2024-11-20 15:39:40.281804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.383 [2024-11-20 15:39:40.290222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.383 [2024-11-20 15:39:40.290517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.383 [2024-11-20 15:39:40.290533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.383 [2024-11-20 15:39:40.299018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.383 [2024-11-20 15:39:40.299280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.383 [2024-11-20 15:39:40.299296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.383 [2024-11-20 15:39:40.307793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.383 [2024-11-20 15:39:40.308050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.383 [2024-11-20 15:39:40.308065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.383 [2024-11-20 15:39:40.316571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0 00:29:51.383 [2024-11-20 15:39:40.316831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.383 [2024-11-20 15:39:40.316845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:51.383 [2024-11-20 15:39:40.325354] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0
00:29:51.383 [2024-11-20 15:39:40.325580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:51.383 [2024-11-20 15:39:40.325595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:51.383 [2024-11-20 15:39:40.334117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0
00:29:51.383 [2024-11-20 15:39:40.334403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:18051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:51.383 [2024-11-20 15:39:40.334419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:51.644 [2024-11-20 15:39:40.342895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f3d0) with pdu=0x2000166f0bc0
00:29:51.644 [2024-11-20 15:39:40.343585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:51.644 [2024-11-20 15:39:40.343601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:51.644 29554.50 IOPS, 115.45 MiB/s
00:29:51.644 Latency(us)
00:29:51.644 [2024-11-20T14:39:40.604Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:51.644 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:29:51.644 nvme0n1 : 2.00 29554.09 115.45 0.00 0.00 4323.79 2129.92 14964.05
00:29:51.644 [2024-11-20T14:39:40.604Z] ===================================================================================================================
00:29:51.644 [2024-11-20T14:39:40.604Z] Total : 29554.09 115.45 0.00 0.00 4323.79 2129.92 14964.05
00:29:51.644 {
00:29:51.644   "results": [
00:29:51.644     {
00:29:51.644       "job": "nvme0n1",
00:29:51.644       "core_mask": "0x2",
00:29:51.644       "workload": "randwrite",
00:29:51.644       "status": "finished",
00:29:51.644       "queue_depth": 128,
00:29:51.644       "io_size": 4096,
00:29:51.644       "runtime": 2.004359,
00:29:51.644       "iops": 29554.086867671907,
00:29:51.644       "mibps": 115.44565182684339,
00:29:51.644       "io_failed": 0,
00:29:51.644       "io_timeout": 0,
00:29:51.644       "avg_latency_us": 4323.793110387089,
00:29:51.644       "min_latency_us": 2129.92,
00:29:51.644       "max_latency_us": 14964.053333333333
00:29:51.644     }
00:29:51.644   ],
00:29:51.644   "core_count": 1
00:29:51.644 }
00:29:51.644 15:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:51.644 15:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:51.644 15:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:51.644 15:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:51.644 | .driver_specific
00:29:51.644 | .nvme_error
00:29:51.644 | .status_code
00:29:51.644 | .command_transient_transport_error'
00:29:51.644 15:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
host/digest.sh@71 -- # (( 232 > 0 )) 00:29:51.644 15:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 779462 00:29:51.644 15:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 779462 ']' 00:29:51.644 15:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 779462 00:29:51.644 15:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:51.644 15:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:51.644 15:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 779462 00:29:51.903 15:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:51.903 15:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:51.903 15:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 779462' 00:29:51.903 killing process with pid 779462 00:29:51.903 15:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 779462 00:29:51.903 Received shutdown signal, test time was about 2.000000 seconds 00:29:51.904 00:29:51.904 Latency(us) 00:29:51.904 [2024-11-20T14:39:40.864Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:51.904 [2024-11-20T14:39:40.864Z] =================================================================================================================== 00:29:51.904 [2024-11-20T14:39:40.864Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:51.904 15:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 779462 00:29:51.904 15:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:29:51.904 15:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:51.904 15:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:51.904 15:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:51.904 15:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:51.904 15:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=780181 00:29:51.904 15:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 780181 /var/tmp/bperf.sock 00:29:51.904 15:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 780181 ']' 00:29:51.904 15:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:29:51.904 15:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:51.904 15:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:51.904 15:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:51.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:51.904 15:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:51.904 15:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:51.904 [2024-11-20 15:39:40.765399] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:29:51.904 [2024-11-20 15:39:40.765457] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid780181 ] 00:29:51.904 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:51.904 Zero copy mechanism will not be used. 00:29:51.904 [2024-11-20 15:39:40.848222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.162 [2024-11-20 15:39:40.877861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:52.731 15:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:52.731 15:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:52.731 15:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:52.731 15:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:53.018 15:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:53.018 15:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.018 15:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:53.018 15:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.018 15:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:53.018 15:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:53.277 nvme0n1 00:29:53.277 15:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:53.277 15:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.278 15:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:53.278 15:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.278 15:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:53.278 15:39:42 
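Condensing the xtrace that ends here: for the 128 KiB pass the harness starts a fresh bdevperf against a private RPC socket, enables per-command NVMe error counters with unlimited retries, re-attaches the controller with --ddgst, re-arms the crc32c corruption, and finally drives the job (the bperf_py perform_tests call, whose trace resumes just below). A sketch of the launch and run steps, with the wrapper functions inlined as plain commands; treating bperf_rpc and bperf_py as thin aliases for rpc.py and bdevperf.py on /var/tmp/bperf.sock is an assumption about what they resolve to:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # 131072-byte random writes, queue depth 16, 2 s runtime, core mask 0x2;
  # -z keeps bdevperf idle until perform_tests arrives over the RPC socket.
  "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 131072 -t 2 -q 16 -z &

  # Count NVMe completions per status code; never give up on retries.
  "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_set_options \
      --nvme-error-stat --bdev-retry-count -1

  # ...attach with --ddgst and inject crc32c corruption, as traced above...

  # Start the registered job; this produces the "Running I/O for 2 seconds..."
  # line that follows in the log.
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests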
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:53.278 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:53.278 Zero copy mechanism will not be used. 00:29:53.278 Running I/O for 2 seconds... 00:29:53.278 [2024-11-20 15:39:42.108966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.278 [2024-11-20 15:39:42.109039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.278 [2024-11-20 15:39:42.109065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:53.278 [2024-11-20 15:39:42.117940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.278 [2024-11-20 15:39:42.118172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.278 [2024-11-20 15:39:42.118191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:53.278 [2024-11-20 15:39:42.125747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.278 [2024-11-20 15:39:42.125988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.278 [2024-11-20 15:39:42.126005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:53.278 [2024-11-20 15:39:42.134414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.278 [2024-11-20 15:39:42.134650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.278 [2024-11-20 15:39:42.134667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:53.278 [2024-11-20 15:39:42.143512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.278 [2024-11-20 15:39:42.143805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.278 [2024-11-20 15:39:42.143824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:53.278 [2024-11-20 15:39:42.151501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.278 [2024-11-20 15:39:42.151563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.278 [2024-11-20 15:39:42.151579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:53.278 [2024-11-20 15:39:42.157949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.278 [2024-11-20 15:39:42.158202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.278 [2024-11-20 15:39:42.158218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:53.278 [2024-11-20 15:39:42.166678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.278 [2024-11-20 15:39:42.166976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.278 [2024-11-20 15:39:42.166993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:53.278 [2024-11-20 15:39:42.175945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.278 [2024-11-20 15:39:42.176236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.278 [2024-11-20 15:39:42.176252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:53.278 [2024-11-20 15:39:42.183557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.278 [2024-11-20 15:39:42.183852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.278 [2024-11-20 15:39:42.183869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:53.278 [2024-11-20 15:39:42.194026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.278 [2024-11-20 15:39:42.194287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.278 [2024-11-20 15:39:42.194303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:53.278 [2024-11-20 15:39:42.205572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.278 [2024-11-20 15:39:42.205838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.278 [2024-11-20 15:39:42.205854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:53.278 [2024-11-20 15:39:42.217289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.278 [2024-11-20 15:39:42.217562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.278 [2024-11-20 15:39:42.217579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:53.278 [2024-11-20 15:39:42.228489] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.278 [2024-11-20 15:39:42.228765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.278 [2024-11-20 15:39:42.228782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:53.538 [2024-11-20 15:39:42.239825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.538 [2024-11-20 15:39:42.240095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.538 [2024-11-20 15:39:42.240111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:53.538 [2024-11-20 15:39:42.250710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.538 [2024-11-20 15:39:42.250955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.539 [2024-11-20 15:39:42.250970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:53.539 [2024-11-20 15:39:42.261270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.539 [2024-11-20 15:39:42.261578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.539 [2024-11-20 15:39:42.261598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:53.539 [2024-11-20 15:39:42.273020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.539 [2024-11-20 15:39:42.273329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.539 [2024-11-20 15:39:42.273346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:53.539 [2024-11-20 15:39:42.284001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.539 [2024-11-20 15:39:42.284288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.539 [2024-11-20 15:39:42.284305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:53.539 [2024-11-20 15:39:42.295459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.539 [2024-11-20 15:39:42.295705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.539 [2024-11-20 15:39:42.295721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:53.539 [2024-11-20 15:39:42.306981] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.539 [2024-11-20 15:39:42.307263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.539 [2024-11-20 15:39:42.307280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:53.539 [2024-11-20 15:39:42.317970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.539 [2024-11-20 15:39:42.318213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.539 [2024-11-20 15:39:42.318229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:53.539 [2024-11-20 15:39:42.328138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.539 [2024-11-20 15:39:42.328390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.539 [2024-11-20 15:39:42.328406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:53.539 [2024-11-20 15:39:42.338507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.539 [2024-11-20 15:39:42.338773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.539 [2024-11-20 15:39:42.338789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:53.539 [2024-11-20 15:39:42.349191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.539 [2024-11-20 15:39:42.349340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.539 [2024-11-20 15:39:42.349355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:53.539 [2024-11-20 15:39:42.359294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.539 [2024-11-20 15:39:42.359510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.539 [2024-11-20 15:39:42.359528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:53.539 [2024-11-20 15:39:42.369882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.539 [2024-11-20 15:39:42.370083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.539 [2024-11-20 15:39:42.370099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:53.539 
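Two details distinguish this second flood from the first run: the WRITEs are now len:32 blocks (32 x 4096 = 131072 bytes, matching -o 131072, assuming the namespace's 4 KiB block size) described as SGL TRANSPORT DATA BLOCK rather than 4 KiB SGL DATA BLOCK OFFSET commands, and the records shown here all carry cid:0. When eyeballing floods like this, a couple of grep one-liners over a saved console log summarize them quickly (console.log is a hypothetical file name):

  # total digest-triggered transient-transport-error completions
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' console.log

  # the same count broken down by queue and command identifier
  grep -o 'qid:1 cid:[0-9]*' console.log | sort | uniq -c | sort -rn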
[2024-11-20 15:39:42.379717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.539 [2024-11-20 15:39:42.379929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.539 [2024-11-20 15:39:42.379946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:53.539 [2024-11-20 15:39:42.390424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.539 [2024-11-20 15:39:42.390773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.539 [2024-11-20 15:39:42.390790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:53.539 [2024-11-20 15:39:42.400050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.539 [2024-11-20 15:39:42.400378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.539 [2024-11-20 15:39:42.400395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:53.539 [2024-11-20 15:39:42.408385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.539 [2024-11-20 15:39:42.408585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.539 [2024-11-20 15:39:42.408602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:53.539 [2024-11-20 15:39:42.415863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.539 [2024-11-20 15:39:42.416203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.539 [2024-11-20 15:39:42.416220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:53.539 [2024-11-20 15:39:42.425062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.539 [2024-11-20 15:39:42.425357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.539 [2024-11-20 15:39:42.425375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:53.539 [2024-11-20 15:39:42.434982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.539 [2024-11-20 15:39:42.435351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.539 [2024-11-20 15:39:42.435369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:29:53.539 [2024-11-20 15:39:42.441962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.539 [2024-11-20 15:39:42.442276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.539 [2024-11-20 15:39:42.442293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:53.539 [2024-11-20 15:39:42.451629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.539 [2024-11-20 15:39:42.451845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.539 [2024-11-20 15:39:42.451860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:53.539 [2024-11-20 15:39:42.460833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.539 [2024-11-20 15:39:42.461070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.539 [2024-11-20 15:39:42.461085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:53.539 [2024-11-20 15:39:42.471423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.539 [2024-11-20 15:39:42.471671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.539 [2024-11-20 15:39:42.471686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:53.539 [2024-11-20 15:39:42.478791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.539 [2024-11-20 15:39:42.479096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.540 [2024-11-20 15:39:42.479113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:53.540 [2024-11-20 15:39:42.486743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.540 [2024-11-20 15:39:42.486982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.540 [2024-11-20 15:39:42.486998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:53.540 [2024-11-20 15:39:42.495607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.540 [2024-11-20 15:39:42.495914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.540 [2024-11-20 15:39:42.495931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:53.800 [2024-11-20 15:39:42.505276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.800 [2024-11-20 15:39:42.505563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.800 [2024-11-20 15:39:42.505580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:53.800 [2024-11-20 15:39:42.513728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.800 [2024-11-20 15:39:42.514051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.800 [2024-11-20 15:39:42.514071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:53.800 [2024-11-20 15:39:42.523528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.800 [2024-11-20 15:39:42.523866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.800 [2024-11-20 15:39:42.523883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:53.800 [2024-11-20 15:39:42.530908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.800 [2024-11-20 15:39:42.531217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.800 [2024-11-20 15:39:42.531234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:53.800 [2024-11-20 15:39:42.537663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.800 [2024-11-20 15:39:42.537856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.800 [2024-11-20 15:39:42.537873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:53.800 [2024-11-20 15:39:42.547082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.800 [2024-11-20 15:39:42.547357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.800 [2024-11-20 15:39:42.547373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:53.800 [2024-11-20 15:39:42.553702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.800 [2024-11-20 15:39:42.554082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.800 [2024-11-20 15:39:42.554100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:53.800 [2024-11-20 15:39:42.563891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.800 [2024-11-20 15:39:42.564217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.800 [2024-11-20 15:39:42.564234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:53.800 [2024-11-20 15:39:42.572117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.800 [2024-11-20 15:39:42.572430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.800 [2024-11-20 15:39:42.572447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:53.800 [2024-11-20 15:39:42.581143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.800 [2024-11-20 15:39:42.581444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.800 [2024-11-20 15:39:42.581460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:53.800 [2024-11-20 15:39:42.592526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.800 [2024-11-20 15:39:42.592853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.800 [2024-11-20 15:39:42.592870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:53.800 [2024-11-20 15:39:42.601003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.800 [2024-11-20 15:39:42.601199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.800 [2024-11-20 15:39:42.601216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:53.800 [2024-11-20 15:39:42.612190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.800 [2024-11-20 15:39:42.612435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.800 [2024-11-20 15:39:42.612451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:53.800 [2024-11-20 15:39:42.622724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.800 [2024-11-20 15:39:42.622946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.800 [2024-11-20 15:39:42.622962] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:53.800 [2024-11-20 15:39:42.632901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.800 [2024-11-20 15:39:42.633222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.800 [2024-11-20 15:39:42.633240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:53.800 [2024-11-20 15:39:42.641713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.800 [2024-11-20 15:39:42.641901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.800 [2024-11-20 15:39:42.641917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:53.800 [2024-11-20 15:39:42.649083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.800 [2024-11-20 15:39:42.649395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.800 [2024-11-20 15:39:42.649413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:53.800 [2024-11-20 15:39:42.659017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.800 [2024-11-20 15:39:42.659317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.800 [2024-11-20 15:39:42.659335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:53.800 [2024-11-20 15:39:42.667164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.800 [2024-11-20 15:39:42.667498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.801 [2024-11-20 15:39:42.667515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:53.801 [2024-11-20 15:39:42.675399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.801 [2024-11-20 15:39:42.675708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.801 [2024-11-20 15:39:42.675726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:53.801 [2024-11-20 15:39:42.684701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.801 [2024-11-20 15:39:42.684899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.801 [2024-11-20 
15:39:42.684916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:53.801 [2024-11-20 15:39:42.692923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.801 [2024-11-20 15:39:42.693213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.801 [2024-11-20 15:39:42.693230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:53.801 [2024-11-20 15:39:42.700265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.801 [2024-11-20 15:39:42.700592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.801 [2024-11-20 15:39:42.700609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:53.801 [2024-11-20 15:39:42.709936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.801 [2024-11-20 15:39:42.710243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.801 [2024-11-20 15:39:42.710261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:53.801 [2024-11-20 15:39:42.718503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.801 [2024-11-20 15:39:42.718810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.801 [2024-11-20 15:39:42.718827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:53.801 [2024-11-20 15:39:42.728232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.801 [2024-11-20 15:39:42.728545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.801 [2024-11-20 15:39:42.728563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:53.801 [2024-11-20 15:39:42.738847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.801 [2024-11-20 15:39:42.739233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.801 [2024-11-20 15:39:42.739250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:53.801 [2024-11-20 15:39:42.748476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.801 [2024-11-20 15:39:42.748792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:53.801 [2024-11-20 15:39:42.748812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:53.801 [2024-11-20 15:39:42.754822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:53.801 [2024-11-20 15:39:42.755103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.801 [2024-11-20 15:39:42.755119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:54.071 [2024-11-20 15:39:42.765828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.071 [2024-11-20 15:39:42.766152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.071 [2024-11-20 15:39:42.766175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:54.071 [2024-11-20 15:39:42.774824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.071 [2024-11-20 15:39:42.775072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.071 [2024-11-20 15:39:42.775089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:54.071 [2024-11-20 15:39:42.786604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.071 [2024-11-20 15:39:42.786842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.071 [2024-11-20 15:39:42.786859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:54.071 [2024-11-20 15:39:42.797882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.071 [2024-11-20 15:39:42.798087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.071 [2024-11-20 15:39:42.798103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:54.071 [2024-11-20 15:39:42.809188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.071 [2024-11-20 15:39:42.809404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.071 [2024-11-20 15:39:42.809420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:54.071 [2024-11-20 15:39:42.820136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.071 [2024-11-20 15:39:42.820356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.071 [2024-11-20 15:39:42.820372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:54.071 [2024-11-20 15:39:42.831631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.071 [2024-11-20 15:39:42.832009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.071 [2024-11-20 15:39:42.832026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:54.071 [2024-11-20 15:39:42.843091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.071 [2024-11-20 15:39:42.843442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.071 [2024-11-20 15:39:42.843461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:54.071 [2024-11-20 15:39:42.853354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.071 [2024-11-20 15:39:42.853586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.071 [2024-11-20 15:39:42.853602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:54.071 [2024-11-20 15:39:42.864512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.071 [2024-11-20 15:39:42.864750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.071 [2024-11-20 15:39:42.864766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:54.071 [2024-11-20 15:39:42.875135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.071 [2024-11-20 15:39:42.875332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.071 [2024-11-20 15:39:42.875349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:54.071 [2024-11-20 15:39:42.886206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.071 [2024-11-20 15:39:42.886527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.071 [2024-11-20 15:39:42.886545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:54.071 [2024-11-20 15:39:42.898335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.071 [2024-11-20 15:39:42.898656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.071 [2024-11-20 15:39:42.898673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:54.071 [2024-11-20 15:39:42.908928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.071 [2024-11-20 15:39:42.909195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.071 [2024-11-20 15:39:42.909210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:54.071 [2024-11-20 15:39:42.920616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.071 [2024-11-20 15:39:42.920891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.071 [2024-11-20 15:39:42.920914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:54.071 [2024-11-20 15:39:42.932573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.071 [2024-11-20 15:39:42.932882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.071 [2024-11-20 15:39:42.932899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:54.071 [2024-11-20 15:39:42.943446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.072 [2024-11-20 15:39:42.943728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.072 [2024-11-20 15:39:42.943745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:54.072 [2024-11-20 15:39:42.953316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.072 [2024-11-20 15:39:42.953677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.072 [2024-11-20 15:39:42.953694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:54.072 [2024-11-20 15:39:42.963380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.072 [2024-11-20 15:39:42.963710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.072 [2024-11-20 15:39:42.963727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:54.072 [2024-11-20 15:39:42.975225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.072 [2024-11-20 15:39:42.975466] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.072 [2024-11-20 15:39:42.975483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:54.072 [2024-11-20 15:39:42.986291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.072 [2024-11-20 15:39:42.986518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.072 [2024-11-20 15:39:42.986533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:54.072 [2024-11-20 15:39:42.997252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.072 [2024-11-20 15:39:42.997573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.072 [2024-11-20 15:39:42.997590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:54.072 [2024-11-20 15:39:43.008982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.072 [2024-11-20 15:39:43.009301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.072 [2024-11-20 15:39:43.009318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:54.072 [2024-11-20 15:39:43.020259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.072 [2024-11-20 15:39:43.020540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.072 [2024-11-20 15:39:43.020557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:54.332 [2024-11-20 15:39:43.031606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.332 [2024-11-20 15:39:43.031819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.332 [2024-11-20 15:39:43.031838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:54.332 [2024-11-20 15:39:43.043267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.332 [2024-11-20 15:39:43.043471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.332 [2024-11-20 15:39:43.043488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:54.332 [2024-11-20 15:39:43.054285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.332 [2024-11-20 15:39:43.054498] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.332 [2024-11-20 15:39:43.054514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:54.332 [2024-11-20 15:39:43.065463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.332 [2024-11-20 15:39:43.065675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.332 [2024-11-20 15:39:43.065692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:54.332 [2024-11-20 15:39:43.073169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.332 [2024-11-20 15:39:43.073536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.332 [2024-11-20 15:39:43.073554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:54.332 [2024-11-20 15:39:43.083920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.332 [2024-11-20 15:39:43.084245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.332 [2024-11-20 15:39:43.084262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:54.332 [2024-11-20 15:39:43.094623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.332 [2024-11-20 15:39:43.094921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.332 [2024-11-20 15:39:43.094938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:54.332 3121.00 IOPS, 390.12 MiB/s [2024-11-20T14:39:43.292Z] [2024-11-20 15:39:43.105154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.332 [2024-11-20 15:39:43.105494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.332 [2024-11-20 15:39:43.105511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:54.332 [2024-11-20 15:39:43.115517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.332 [2024-11-20 15:39:43.115829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.332 [2024-11-20 15:39:43.115845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:54.332 [2024-11-20 15:39:43.123002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.332 [2024-11-20 15:39:43.123191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.332 [2024-11-20 15:39:43.123207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:54.332 [2024-11-20 15:39:43.128045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.332 [2024-11-20 15:39:43.128227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.332 [2024-11-20 15:39:43.128244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:54.332 [2024-11-20 15:39:43.132951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.332 [2024-11-20 15:39:43.133131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.332 [2024-11-20 15:39:43.133147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:54.332 [2024-11-20 15:39:43.141929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.332 [2024-11-20 15:39:43.142261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.332 [2024-11-20 15:39:43.142279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:54.332 [2024-11-20 15:39:43.149137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.333 [2024-11-20 15:39:43.149321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.333 [2024-11-20 15:39:43.149337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:54.333 [2024-11-20 15:39:43.156662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.333 [2024-11-20 15:39:43.156940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.333 [2024-11-20 15:39:43.156957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:54.333 [2024-11-20 15:39:43.162104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.333 [2024-11-20 15:39:43.162186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.333 [2024-11-20 15:39:43.162201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:54.333 [2024-11-20 15:39:43.167489] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.333 [2024-11-20 15:39:43.167736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.333 [2024-11-20 15:39:43.167752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:54.333 [2024-11-20 15:39:43.173010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.333 [2024-11-20 15:39:43.173083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.333 [2024-11-20 15:39:43.173098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:54.333 [2024-11-20 15:39:43.177437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.333 [2024-11-20 15:39:43.177518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.333 [2024-11-20 15:39:43.177534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:54.333 [2024-11-20 15:39:43.183155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.333 [2024-11-20 15:39:43.183215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.333 [2024-11-20 15:39:43.183230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:54.333 [2024-11-20 15:39:43.188523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.333 [2024-11-20 15:39:43.188577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.333 [2024-11-20 15:39:43.188592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:54.333 [2024-11-20 15:39:43.193486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.333 [2024-11-20 15:39:43.193551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.333 [2024-11-20 15:39:43.193565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:54.333 [2024-11-20 15:39:43.201078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.333 [2024-11-20 15:39:43.201138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.333 [2024-11-20 15:39:43.201153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:54.333 [2024-11-20 15:39:43.205486] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.333 [2024-11-20 15:39:43.205595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.333 [2024-11-20 15:39:43.205610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:54.333 [2024-11-20 15:39:43.212310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.333 [2024-11-20 15:39:43.212575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.333 [2024-11-20 15:39:43.212590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:54.333 [2024-11-20 15:39:43.222166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.333 [2024-11-20 15:39:43.222477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.333 [2024-11-20 15:39:43.222494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:54.333 [2024-11-20 15:39:43.232461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.333 [2024-11-20 15:39:43.232514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.333 [2024-11-20 15:39:43.232532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:54.333 [2024-11-20 15:39:43.242490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.333 [2024-11-20 15:39:43.242555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.333 [2024-11-20 15:39:43.242571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:54.333 [2024-11-20 15:39:43.252139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.333 [2024-11-20 15:39:43.252228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.333 [2024-11-20 15:39:43.252243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:54.333 [2024-11-20 15:39:43.262650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.333 [2024-11-20 15:39:43.262768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.333 [2024-11-20 15:39:43.262783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:54.333 
[2024-11-20 15:39:43.272002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.333 [2024-11-20 15:39:43.272050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.333 [2024-11-20 15:39:43.272065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:54.333 [2024-11-20 15:39:43.276065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.333 [2024-11-20 15:39:43.276112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.333 [2024-11-20 15:39:43.276127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:54.333 [2024-11-20 15:39:43.279827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.333 [2024-11-20 15:39:43.279886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.333 [2024-11-20 15:39:43.279901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:54.333 [2024-11-20 15:39:43.283989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.333 [2024-11-20 15:39:43.284042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.333 [2024-11-20 15:39:43.284057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:54.593 [2024-11-20 15:39:43.291869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.593 [2024-11-20 15:39:43.291920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.593 [2024-11-20 15:39:43.291935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:54.594 [2024-11-20 15:39:43.300612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.594 [2024-11-20 15:39:43.300913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.594 [2024-11-20 15:39:43.300929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:54.594 [2024-11-20 15:39:43.311362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.594 [2024-11-20 15:39:43.311681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.594 [2024-11-20 15:39:43.311697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:29:54.594 [2024-11-20 15:39:43.321797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.594 [2024-11-20 15:39:43.321867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.594 [2024-11-20 15:39:43.321882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:54.594 [2024-11-20 15:39:43.332446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.594 [2024-11-20 15:39:43.332753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.594 [2024-11-20 15:39:43.332770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:54.594 [2024-11-20 15:39:43.343050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.594 [2024-11-20 15:39:43.343319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.594 [2024-11-20 15:39:43.343335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:54.594 [2024-11-20 15:39:43.353042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.594 [2024-11-20 15:39:43.353098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.594 [2024-11-20 15:39:43.353114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:54.594 [2024-11-20 15:39:43.363761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.594 [2024-11-20 15:39:43.364017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.594 [2024-11-20 15:39:43.364042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:54.594 [2024-11-20 15:39:43.373927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.594 [2024-11-20 15:39:43.373978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.594 [2024-11-20 15:39:43.373994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:54.594 [2024-11-20 15:39:43.380753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.594 [2024-11-20 15:39:43.380802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.594 [2024-11-20 15:39:43.380818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:54.594 [2024-11-20 15:39:43.386713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.594 [2024-11-20 15:39:43.386757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.594 [2024-11-20 15:39:43.386772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:54.594 [2024-11-20 15:39:43.391310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.594 [2024-11-20 15:39:43.391357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.594 [2024-11-20 15:39:43.391373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:54.594 [2024-11-20 15:39:43.398170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.594 [2024-11-20 15:39:43.398215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.594 [2024-11-20 15:39:43.398231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:54.594 [2024-11-20 15:39:43.405341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.594 [2024-11-20 15:39:43.405385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.594 [2024-11-20 15:39:43.405400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:54.594 [2024-11-20 15:39:43.410226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.594 [2024-11-20 15:39:43.410270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.594 [2024-11-20 15:39:43.410285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:54.594 [2024-11-20 15:39:43.414930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.594 [2024-11-20 15:39:43.414973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.594 [2024-11-20 15:39:43.414989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:54.594 [2024-11-20 15:39:43.418780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.594 [2024-11-20 15:39:43.418826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.594 [2024-11-20 15:39:43.418841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:54.594 [2024-11-20 15:39:43.425478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.594 [2024-11-20 15:39:43.425530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.594 [2024-11-20 15:39:43.425546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:54.594 [2024-11-20 15:39:43.430519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.594 [2024-11-20 15:39:43.430628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.594 [2024-11-20 15:39:43.430646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:54.594 [2024-11-20 15:39:43.440051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.594 [2024-11-20 15:39:43.440113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.594 [2024-11-20 15:39:43.440128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:54.594 [2024-11-20 15:39:43.449455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.594 [2024-11-20 15:39:43.449736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.594 [2024-11-20 15:39:43.449752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:54.594 [2024-11-20 15:39:43.459886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.594 [2024-11-20 15:39:43.460152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.594 [2024-11-20 15:39:43.460172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:54.594 [2024-11-20 15:39:43.470889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.594 [2024-11-20 15:39:43.471171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.594 [2024-11-20 15:39:43.471188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:54.594 [2024-11-20 15:39:43.482003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.594 [2024-11-20 15:39:43.482262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.594 [2024-11-20 15:39:43.482277] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:54.594 [2024-11-20 15:39:43.492948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.594 [2024-11-20 15:39:43.493277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.594 [2024-11-20 15:39:43.493293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:54.594 [2024-11-20 15:39:43.504177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.594 [2024-11-20 15:39:43.504488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.594 [2024-11-20 15:39:43.504504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:54.594 [2024-11-20 15:39:43.515107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.594 [2024-11-20 15:39:43.515366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.594 [2024-11-20 15:39:43.515382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:54.594 [2024-11-20 15:39:43.526038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.594 [2024-11-20 15:39:43.526324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.595 [2024-11-20 15:39:43.526339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:54.595 [2024-11-20 15:39:43.536790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.595 [2024-11-20 15:39:43.537056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.595 [2024-11-20 15:39:43.537071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:54.595 [2024-11-20 15:39:43.547414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.595 [2024-11-20 15:39:43.547665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.595 [2024-11-20 15:39:43.547681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:54.856 [2024-11-20 15:39:43.558251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.856 [2024-11-20 15:39:43.558541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.856 [2024-11-20 15:39:43.558557] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:54.856 [2024-11-20 15:39:43.569317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.856 [2024-11-20 15:39:43.569562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.856 [2024-11-20 15:39:43.569577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:54.856 [2024-11-20 15:39:43.579803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.856 [2024-11-20 15:39:43.580066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.856 [2024-11-20 15:39:43.580083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:54.856 [2024-11-20 15:39:43.590523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.856 [2024-11-20 15:39:43.590724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.856 [2024-11-20 15:39:43.590739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:54.856 [2024-11-20 15:39:43.601574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.856 [2024-11-20 15:39:43.601823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.856 [2024-11-20 15:39:43.601839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:54.856 [2024-11-20 15:39:43.610581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.856 [2024-11-20 15:39:43.610661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.856 [2024-11-20 15:39:43.610677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:54.856 [2024-11-20 15:39:43.621022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.856 [2024-11-20 15:39:43.621392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.856 [2024-11-20 15:39:43.621408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:54.856 [2024-11-20 15:39:43.631748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.856 [2024-11-20 15:39:43.631854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.856 [2024-11-20 
15:39:43.631870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:54.856 [2024-11-20 15:39:43.641113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.856 [2024-11-20 15:39:43.641186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.856 [2024-11-20 15:39:43.641202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:54.856 [2024-11-20 15:39:43.650879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.856 [2024-11-20 15:39:43.650953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.856 [2024-11-20 15:39:43.650969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:54.856 [2024-11-20 15:39:43.655040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.856 [2024-11-20 15:39:43.655109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.856 [2024-11-20 15:39:43.655125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:54.857 [2024-11-20 15:39:43.657997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.857 [2024-11-20 15:39:43.658049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.857 [2024-11-20 15:39:43.658064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:54.857 [2024-11-20 15:39:43.660988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.857 [2024-11-20 15:39:43.661039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.857 [2024-11-20 15:39:43.661054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:54.857 [2024-11-20 15:39:43.664128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.857 [2024-11-20 15:39:43.664184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.857 [2024-11-20 15:39:43.664199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:54.857 [2024-11-20 15:39:43.667190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.857 [2024-11-20 15:39:43.667256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
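The records above all follow one pattern: the initiator's TCP transport recomputes the CRC32C data digest for each incoming data PDU (tcp.c:data_crc32_calc_done), the check fails because this test injects corrupted digests, and the affected WRITE is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22). The harness later counts those completions over the bdevperf RPC socket; the following is a minimal sketch of that counting step, reusing the same rpc.py invocation and jq filter that appear verbatim further down in this trace (socket path and bdev name taken from this run). The paired completion record resumes immediately below it.

# Sketch only: assumes an SPDK bdevperf instance is serving RPCs on
# /var/tmp/bperf.sock with an attached bdev named nvme0n1, as in this run.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0]
        | .driver_specific
        | .nvme_error
        | .status_code
        | .command_transient_transport_error')

# The test only asserts the counter is non-zero; this run observed 239.
(( ${errcount:-0} > 0 )) && echo "saw $errcount transient transport errors on nvme0n1"

Note that every completion carries dnr:0 (do-not-retry clear), so the driver is allowed to retry and I/O keeps flowing; the error storm therefore continues until the timed 2-second run ends.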
00:29:54.857 [2024-11-20 15:39:43.667274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:54.857 [2024-11-20 15:39:43.670389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.857 [2024-11-20 15:39:43.670443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.857 [2024-11-20 15:39:43.670459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:54.857 [2024-11-20 15:39:43.676123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.857 [2024-11-20 15:39:43.676202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.857 [2024-11-20 15:39:43.676217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:54.857 [2024-11-20 15:39:43.680351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.857 [2024-11-20 15:39:43.680428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.857 [2024-11-20 15:39:43.680443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:54.857 [2024-11-20 15:39:43.686030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.857 [2024-11-20 15:39:43.686240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.857 [2024-11-20 15:39:43.686255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:54.857 [2024-11-20 15:39:43.693718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.857 [2024-11-20 15:39:43.693771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.857 [2024-11-20 15:39:43.693786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:54.857 [2024-11-20 15:39:43.697064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.857 [2024-11-20 15:39:43.697108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.857 [2024-11-20 15:39:43.697124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:54.857 [2024-11-20 15:39:43.700544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.857 [2024-11-20 15:39:43.700591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4608 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:54.857 [2024-11-20 15:39:43.700607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:54.857 [2024-11-20 15:39:43.704203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.857 [2024-11-20 15:39:43.704248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.857 [2024-11-20 15:39:43.704263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:54.857 [2024-11-20 15:39:43.708284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.857 [2024-11-20 15:39:43.708333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.857 [2024-11-20 15:39:43.708349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:54.857 [2024-11-20 15:39:43.711743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.857 [2024-11-20 15:39:43.711810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.857 [2024-11-20 15:39:43.711826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:54.857 [2024-11-20 15:39:43.715748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.857 [2024-11-20 15:39:43.715792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.857 [2024-11-20 15:39:43.715807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:54.857 [2024-11-20 15:39:43.720292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.857 [2024-11-20 15:39:43.720337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.857 [2024-11-20 15:39:43.720352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:54.857 [2024-11-20 15:39:43.725537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.857 [2024-11-20 15:39:43.725612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.857 [2024-11-20 15:39:43.725628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:54.857 [2024-11-20 15:39:43.733057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.857 [2024-11-20 15:39:43.733102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.857 [2024-11-20 15:39:43.733118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:54.857 [2024-11-20 15:39:43.736763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.857 [2024-11-20 15:39:43.736807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.857 [2024-11-20 15:39:43.736823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:54.857 [2024-11-20 15:39:43.740850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.857 [2024-11-20 15:39:43.740930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.857 [2024-11-20 15:39:43.740945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:54.857 [2024-11-20 15:39:43.747601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.857 [2024-11-20 15:39:43.747644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.857 [2024-11-20 15:39:43.747659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:54.857 [2024-11-20 15:39:43.751202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.857 [2024-11-20 15:39:43.751246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.857 [2024-11-20 15:39:43.751261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:54.857 [2024-11-20 15:39:43.755038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.857 [2024-11-20 15:39:43.755082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.857 [2024-11-20 15:39:43.755098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:54.857 [2024-11-20 15:39:43.758733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.857 [2024-11-20 15:39:43.758780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.857 [2024-11-20 15:39:43.758795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:54.857 [2024-11-20 15:39:43.763134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.857 [2024-11-20 15:39:43.763426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.857 [2024-11-20 15:39:43.763449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:54.857 [2024-11-20 15:39:43.769529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.857 [2024-11-20 15:39:43.769791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.857 [2024-11-20 15:39:43.769808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:54.857 [2024-11-20 15:39:43.775902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.857 [2024-11-20 15:39:43.775965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.857 [2024-11-20 15:39:43.775980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:54.858 [2024-11-20 15:39:43.779773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.858 [2024-11-20 15:39:43.779818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.858 [2024-11-20 15:39:43.779834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:54.858 [2024-11-20 15:39:43.783591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.858 [2024-11-20 15:39:43.783636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.858 [2024-11-20 15:39:43.783652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:54.858 [2024-11-20 15:39:43.786884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.858 [2024-11-20 15:39:43.786931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.858 [2024-11-20 15:39:43.786949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:54.858 [2024-11-20 15:39:43.790653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.858 [2024-11-20 15:39:43.790713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.858 [2024-11-20 15:39:43.790728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:54.858 [2024-11-20 15:39:43.796533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.858 [2024-11-20 15:39:43.796629] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.858 [2024-11-20 15:39:43.796644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:54.858 [2024-11-20 15:39:43.800939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.858 [2024-11-20 15:39:43.801001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.858 [2024-11-20 15:39:43.801016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:54.858 [2024-11-20 15:39:43.805508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.858 [2024-11-20 15:39:43.805575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.858 [2024-11-20 15:39:43.805590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:54.858 [2024-11-20 15:39:43.809238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:54.858 [2024-11-20 15:39:43.809303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.858 [2024-11-20 15:39:43.809319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:55.118 [2024-11-20 15:39:43.816484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:55.118 [2024-11-20 15:39:43.816720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.118 [2024-11-20 15:39:43.816735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:55.118 [2024-11-20 15:39:43.821798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:55.118 [2024-11-20 15:39:43.821842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.118 [2024-11-20 15:39:43.821858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:55.118 [2024-11-20 15:39:43.825616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:55.118 [2024-11-20 15:39:43.825660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.118 [2024-11-20 15:39:43.825675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:55.118 [2024-11-20 15:39:43.829501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:55.118 [2024-11-20 
15:39:43.829548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.118 [2024-11-20 15:39:43.829564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:55.118 [2024-11-20 15:39:43.833382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:55.118 [2024-11-20 15:39:43.833428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.118 [2024-11-20 15:39:43.833443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:55.118 [2024-11-20 15:39:43.841597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:55.118 [2024-11-20 15:39:43.841652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.118 [2024-11-20 15:39:43.841668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:55.118 [2024-11-20 15:39:43.846262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:55.118 [2024-11-20 15:39:43.846306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.119 [2024-11-20 15:39:43.846322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:55.119 [2024-11-20 15:39:43.853607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:55.119 [2024-11-20 15:39:43.853654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.119 [2024-11-20 15:39:43.853669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:55.119 [2024-11-20 15:39:43.859285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:55.119 [2024-11-20 15:39:43.859509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.119 [2024-11-20 15:39:43.859524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:55.119 [2024-11-20 15:39:43.868188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:55.119 [2024-11-20 15:39:43.868461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.119 [2024-11-20 15:39:43.868477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:55.119 [2024-11-20 15:39:43.876868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with 
pdu=0x2000166ff3c8 00:29:55.119 [2024-11-20 15:39:43.876945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.119 [2024-11-20 15:39:43.876960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:55.119 [2024-11-20 15:39:43.886608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:55.119 [2024-11-20 15:39:43.886659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.119 [2024-11-20 15:39:43.886674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:55.119 [2024-11-20 15:39:43.894759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:55.119 [2024-11-20 15:39:43.894834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.119 [2024-11-20 15:39:43.894850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:55.119 [2024-11-20 15:39:43.902880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:55.119 [2024-11-20 15:39:43.902941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.119 [2024-11-20 15:39:43.902956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:55.119 [2024-11-20 15:39:43.910130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:55.119 [2024-11-20 15:39:43.910414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.119 [2024-11-20 15:39:43.910430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:55.119 [2024-11-20 15:39:43.916833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:55.119 [2024-11-20 15:39:43.917125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.119 [2024-11-20 15:39:43.917142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:55.119 [2024-11-20 15:39:43.923056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:55.119 [2024-11-20 15:39:43.923136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.119 [2024-11-20 15:39:43.923151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:55.119 [2024-11-20 15:39:43.931330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:55.119 [2024-11-20 15:39:43.931379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.119 [2024-11-20 15:39:43.931394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:55.119 [2024-11-20 15:39:43.940197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:55.119 [2024-11-20 15:39:43.940362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.119 [2024-11-20 15:39:43.940377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:55.119 [2024-11-20 15:39:43.951200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:55.119 [2024-11-20 15:39:43.951494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.119 [2024-11-20 15:39:43.951510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:55.119 [2024-11-20 15:39:43.961809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:55.119 [2024-11-20 15:39:43.962080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.119 [2024-11-20 15:39:43.962099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:55.119 [2024-11-20 15:39:43.972776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:55.119 [2024-11-20 15:39:43.973064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.119 [2024-11-20 15:39:43.973081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:55.119 [2024-11-20 15:39:43.983905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:55.119 [2024-11-20 15:39:43.984176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.119 [2024-11-20 15:39:43.984191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:55.119 [2024-11-20 15:39:43.995714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:55.119 [2024-11-20 15:39:43.995950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.119 [2024-11-20 15:39:43.995965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:55.119 [2024-11-20 15:39:44.007321] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:55.119 [2024-11-20 15:39:44.007455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.119 [2024-11-20 15:39:44.007470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:55.119 [2024-11-20 15:39:44.018947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:55.119 [2024-11-20 15:39:44.019185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.119 [2024-11-20 15:39:44.019201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:55.119 [2024-11-20 15:39:44.030112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:55.119 [2024-11-20 15:39:44.030384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.119 [2024-11-20 15:39:44.030400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:55.119 [2024-11-20 15:39:44.041627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:55.119 [2024-11-20 15:39:44.041913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.119 [2024-11-20 15:39:44.041929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:55.119 [2024-11-20 15:39:44.053149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:55.119 [2024-11-20 15:39:44.053424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.119 [2024-11-20 15:39:44.053440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:55.119 [2024-11-20 15:39:44.062586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:55.119 [2024-11-20 15:39:44.062817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.119 [2024-11-20 15:39:44.062833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:55.119 [2024-11-20 15:39:44.071528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8 00:29:55.119 [2024-11-20 15:39:44.071797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.119 [2024-11-20 15:39:44.071813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:55.380 [2024-11-20 15:39:44.080225] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8
00:29:55.380 [2024-11-20 15:39:44.080338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:55.380 [2024-11-20 15:39:44.080353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:55.380 [2024-11-20 15:39:44.086498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8
00:29:55.380 [2024-11-20 15:39:44.086547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:55.380 [2024-11-20 15:39:44.086562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:55.380 [2024-11-20 15:39:44.095018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8
00:29:55.380 [2024-11-20 15:39:44.095238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:55.380 [2024-11-20 15:39:44.095254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:55.380 [2024-11-20 15:39:44.102728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x138f710) with pdu=0x2000166ff3c8
00:29:55.380 [2024-11-20 15:39:44.102771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:55.380 [2024-11-20 15:39:44.102787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:55.380 3691.00 IOPS, 461.38 MiB/s
00:29:55.380 Latency(us)
00:29:55.380 [2024-11-20T14:39:44.340Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:55.380 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:29:55.380 nvme0n1 : 2.00 3693.91 461.74 0.00 0.00 4326.50 1385.81 15073.28
00:29:55.380 [2024-11-20T14:39:44.340Z] ===================================================================================================================
00:29:55.380 [2024-11-20T14:39:44.340Z] Total : 3693.91 461.74 0.00 0.00 4326.50 1385.81 15073.28
00:29:55.380 {
00:29:55.380 "results": [
00:29:55.380 {
00:29:55.380 "job": "nvme0n1",
00:29:55.380 "core_mask": "0x2",
00:29:55.380 "workload": "randwrite",
00:29:55.380 "status": "finished",
00:29:55.380 "queue_depth": 16,
00:29:55.380 "io_size": 131072,
00:29:55.380 "runtime": 2.00357,
00:29:55.380 "iops": 3693.906377116846,
00:29:55.380 "mibps": 461.7382971396058,
00:29:55.380 "io_failed": 0,
00:29:55.380 "io_timeout": 0,
00:29:55.380 "avg_latency_us": 4326.500771967752,
00:29:55.380 "min_latency_us": 1385.8133333333333,
00:29:55.380 "max_latency_us": 15073.28
00:29:55.380 }
00:29:55.380 ],
00:29:55.380 "core_count": 1
00:29:55.380 }
00:29:55.380 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:55.380 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:55.380 15:39:44
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:55.380 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:55.380 | .driver_specific 00:29:55.380 | .nvme_error 00:29:55.380 | .status_code 00:29:55.380 | .command_transient_transport_error' 00:29:55.380 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 239 > 0 )) 00:29:55.380 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 780181 00:29:55.380 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 780181 ']' 00:29:55.380 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 780181 00:29:55.380 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:55.380 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:55.380 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 780181 00:29:55.640 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:55.640 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:55.640 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 780181' 00:29:55.640 killing process with pid 780181 00:29:55.640 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 780181 00:29:55.640 Received shutdown signal, test time was about 2.000000 seconds 00:29:55.640 00:29:55.640 Latency(us) 00:29:55.640 [2024-11-20T14:39:44.600Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:55.640 [2024-11-20T14:39:44.600Z] =================================================================================================================== 00:29:55.640 [2024-11-20T14:39:44.600Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:55.640 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 780181 00:29:55.640 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 777778 00:29:55.640 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 777778 ']' 00:29:55.640 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 777778 00:29:55.640 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:55.641 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:55.641 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 777778 00:29:55.641 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:55.641 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:55.641 15:39:44 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 777778' 00:29:55.641 killing process with pid 777778 00:29:55.641 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 777778 00:29:55.641 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 777778 00:29:55.902 00:29:55.902 real 0m16.447s 00:29:55.902 user 0m32.692s 00:29:55.902 sys 0m3.507s 00:29:55.902 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:55.902 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:55.902 ************************************ 00:29:55.902 END TEST nvmf_digest_error 00:29:55.902 ************************************ 00:29:55.902 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:55.902 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:29:55.902 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:55.902 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:29:55.902 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:55.902 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:29:55.902 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:55.902 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:55.902 rmmod nvme_tcp 00:29:55.902 rmmod nvme_fabrics 00:29:55.902 rmmod nvme_keyring 00:29:55.902 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:55.902 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:29:55.902 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:29:55.902 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 777778 ']' 00:29:55.902 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 777778 00:29:55.902 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 777778 ']' 00:29:55.902 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 777778 00:29:55.902 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (777778) - No such process 00:29:55.902 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 777778 is not found' 00:29:55.902 Process with pid 777778 is not found 00:29:55.902 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:55.902 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:55.902 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:55.902 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:29:55.902 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:29:55.902 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:55.902 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:29:55.902 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:55.902 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:55.902 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:55.902 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:55.902 15:39:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:58.450 15:39:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:58.450 00:29:58.450 real 0m42.943s 00:29:58.450 user 1m7.243s 00:29:58.450 sys 0m12.982s 00:29:58.450 15:39:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:58.450 15:39:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:58.450 ************************************ 00:29:58.450 END TEST nvmf_digest 00:29:58.450 ************************************ 00:29:58.450 15:39:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:29:58.450 15:39:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:29:58.450 15:39:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:29:58.450 15:39:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:58.450 15:39:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:58.450 15:39:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:58.450 15:39:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.450 ************************************ 00:29:58.450 START TEST nvmf_bdevperf 00:29:58.450 ************************************ 00:29:58.450 15:39:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:58.450 * Looking for test storage... 
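Before the bdevperf prologue continues below, note how the digest suite tore itself down above: killprocess probes each pid with kill -0 (the already-gone nvmf target pid 777778 takes the "No such process" path), signals survivors and reaps them with wait, and nvmftestfini then unloads nvme-tcp/nvme-fabrics/nvme-keyring and restores the iptables rules. A simplified sketch of that killprocess guard follows; the function name here is hypothetical, and the real autotest_common.sh helper additionally special-cases processes launched via sudo (which is what the "reactor_1 = sudo" comparison in the trace is probing).

# Hypothetical condensation of the killprocess flow traced above.
killprocess_sketch() {
    local pid=$1
    [[ -n $pid ]] || return 1               # mirrors the '[' -z $pid ']' guard
    if ! kill -0 "$pid" 2>/dev/null; then
        # Same outcome as "kill: (777778) - No such process" in the log.
        echo "Process with pid $pid is not found"
        return 0
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true         # reap it so the next test starts clean
}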
00:29:58.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:58.450 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:58.450 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:29:58.450 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:58.450 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:58.450 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:58.450 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:58.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.451 --rc genhtml_branch_coverage=1 00:29:58.451 --rc genhtml_function_coverage=1 00:29:58.451 --rc genhtml_legend=1 00:29:58.451 --rc geninfo_all_blocks=1 00:29:58.451 --rc geninfo_unexecuted_blocks=1 00:29:58.451 00:29:58.451 ' 00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:58.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.451 --rc genhtml_branch_coverage=1 00:29:58.451 --rc genhtml_function_coverage=1 00:29:58.451 --rc genhtml_legend=1 00:29:58.451 --rc geninfo_all_blocks=1 00:29:58.451 --rc geninfo_unexecuted_blocks=1 00:29:58.451 00:29:58.451 ' 00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:58.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.451 --rc genhtml_branch_coverage=1 00:29:58.451 --rc genhtml_function_coverage=1 00:29:58.451 --rc genhtml_legend=1 00:29:58.451 --rc geninfo_all_blocks=1 00:29:58.451 --rc geninfo_unexecuted_blocks=1 00:29:58.451 00:29:58.451 ' 00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:58.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.451 --rc genhtml_branch_coverage=1 00:29:58.451 --rc genhtml_function_coverage=1 00:29:58.451 --rc genhtml_legend=1 00:29:58.451 --rc geninfo_all_blocks=1 00:29:58.451 --rc geninfo_unexecuted_blocks=1 00:29:58.451 00:29:58.451 ' 00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:29:58.451 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable
00:29:58.451 15:39:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=()
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=()
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=()
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=()
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=()
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=()
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=()
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:30:06.593 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:30:06.593 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:30:06.593 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
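The per-device loop above (and its continuation just below) pairs each supported PCI function with its kernel net device by globbing /sys/bus/pci/devices/$pci/net/. A minimal standalone sketch of that same /sys walk, hard-coded to the Intel 8086:0x159b (E810) ID found in this run; the helper name list_e810_netdevs is made up for illustration and is not part of nvmf/common.sh:

  #!/usr/bin/env bash
  # Sketch: print the netdev name(s) behind each Intel E810 (8086:159b)
  # function, mirroring pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*).
  list_e810_netdevs() {    # hypothetical helper name
    local pci
    for pci in /sys/bus/pci/devices/*; do
      [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
      ls "$pci/net" 2>/dev/null    # e.g. cvl_0_0, cvl_0_1 on this host
    done
  }
  list_e810_netdevs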
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]]
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:30:06.594 Found net devices under 0000:4b:00.0: cvl_0_0
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]]
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:30:06.594 Found net devices under 0000:4b:00.1: cvl_0_1
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:30:06.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:30:06.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.570 ms
00:30:06.594
00:30:06.594 --- 10.0.0.2 ping statistics ---
00:30:06.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:06.594 rtt min/avg/max/mdev = 0.570/0.570/0.570/0.000 ms
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:30:06.594 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:30:06.594 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms
00:30:06.594
00:30:06.594 --- 10.0.0.1 ping statistics ---
00:30:06.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:06.594 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=785129
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 785129
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 785129 ']'
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:06.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:06.594 15:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:06.594 [2024-11-20 15:39:54.839971] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization...
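At this point nvmfappstart has launched nvmf_tgt inside the target namespace with -m 0xE (binary 1110, i.e. reactors pinned to cores 1-3, which matches the three reactor notices below), and waitforlisten polls until pid 785129 answers on /var/tmp/spdk.sock. The real helper lives in test/common/autotest_common.sh; the following is only a rough sketch of the polling idea under the parameters logged above (rpc_addr, max_retries=100), not the verbatim implementation:

  # Illustrative waitforlisten-style poll (assumed shape, not the actual helper).
  wait_for_rpc() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do            # max_retries=100, as logged above
      kill -0 "$pid" 2>/dev/null || return 1   # target died while starting
      [ -S "$rpc_addr" ] &&
        scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
      sleep 0.1
    done
    return 1
  }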
00:30:06.594 [2024-11-20 15:39:54.840035] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[2024-11-20 15:39:54.940868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
[2024-11-20 15:39:54.993274] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-11-20 15:39:54.993322] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-11-20 15:39:54.993330] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-11-20 15:39:54.993337] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-11-20 15:39:54.993343] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[2024-11-20 15:39:54.995370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
[2024-11-20 15:39:54.995531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
[2024-11-20 15:39:54.995533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:30:06.856 15:39:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:06.856 15:39:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:30:06.856 15:39:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:30:06.856 15:39:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:06.856 15:39:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:06.856 15:39:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:06.856 15:39:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:30:06.856 15:39:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:06.856 15:39:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:06.856 [2024-11-20 15:39:55.726186] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:06.856 15:39:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:06.856 15:39:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:30:06.856 15:39:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:06.856 15:39:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:06.856 Malloc0
00:30:06.856 15:39:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:06.856 15:39:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:06.856 15:39:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:06.856 15:39:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:06.856 15:39:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
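The rpc_cmd calls above, together with the add-ns and add-listener calls that follow just below, configure the whole target over its UNIX-domain RPC socket. Spelled out as stock scripts/rpc.py invocations, this is a sketch rather than what the harness literally runs: the extra -o flag seen on the logged nvmf_create_transport line is omitted here, and -u 8192 is taken to be the io-unit-size carried in NVMF_TRANSPORT_OPTS:

  RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t tcp -u 8192        # *** TCP Transport Init ***
  $RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MiB ram bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  # the two steps logged below:
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420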
00:30:06.856 15:39:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:06.856 15:39:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:06.856 15:39:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:06.856 15:39:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:06.856 15:39:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:06.856 15:39:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:06.856 15:39:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:06.856 [2024-11-20 15:39:55.798671] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:06.856 15:39:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:06.856 15:39:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:30:06.856 15:39:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:30:06.856 15:39:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
00:30:06.856 15:39:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
00:30:06.856 15:39:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:30:06.856 15:39:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:30:06.856 {
00:30:06.856 "params": {
00:30:06.856 "name": "Nvme$subsystem",
00:30:06.856 "trtype": "$TEST_TRANSPORT",
00:30:06.856 "traddr": "$NVMF_FIRST_TARGET_IP",
00:30:06.856 "adrfam": "ipv4",
00:30:06.856 "trsvcid": "$NVMF_PORT",
00:30:06.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:30:06.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:30:06.856 "hdgst": ${hdgst:-false},
00:30:06.856 "ddgst": ${ddgst:-false}
00:30:06.856 },
00:30:06.856 "method": "bdev_nvme_attach_controller"
00:30:06.856 }
00:30:06.856 EOF
00:30:06.856 )")
00:30:06.856 15:39:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat
00:30:07.117 15:39:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq .
00:30:07.117 15:39:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=,
00:30:07.117 15:39:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:30:07.117 "params": {
00:30:07.117 "name": "Nvme1",
00:30:07.117 "trtype": "tcp",
00:30:07.117 "traddr": "10.0.0.2",
00:30:07.117 "adrfam": "ipv4",
00:30:07.117 "trsvcid": "4420",
00:30:07.117 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:30:07.117 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:30:07.117 "hdgst": false,
00:30:07.117 "ddgst": false
00:30:07.117 },
00:30:07.117 "method": "bdev_nvme_attach_controller"
00:30:07.117 }'
00:30:07.117 [2024-11-20 15:39:55.858207] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization...
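gen_nvmf_target_json has resolved its heredoc template into the single bdev_nvme_attach_controller object printed above and handed it to bdevperf on an anonymous pipe (--json /dev/fd/62). An equivalent standalone run with the config in a file; the outer subsystems/bdev wrapper is reconstructed from gen_nvmf_target_json in nvmf/common.sh rather than printed verbatim in this log, and /tmp/bdevperf.json is an illustrative path:

  cat > /tmp/bdevperf.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            },
            "method": "bdev_nvme_attach_controller"
          }
        ]
      }
    ]
  }
  EOF
  build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 1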
00:30:07.118 [2024-11-20 15:39:55.858272] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785231 ]
[2024-11-20 15:39:55.948597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-20 15:39:56.001235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:30:07.690 Running I/O for 1 seconds...
00:30:08.631 8739.00 IOPS, 34.14 MiB/s
00:30:08.631
00:30:08.631 Latency(us)
00:30:08.631 [2024-11-20T14:39:57.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:08.631 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:08.631 Verification LBA range: start 0x0 length 0x4000
00:30:08.631 Nvme1n1 : 1.01 8806.78 34.40 0.00 0.00 14471.07 2157.23 13926.40
00:30:08.631 [2024-11-20T14:39:57.591Z] ===================================================================================================================
00:30:08.631 [2024-11-20T14:39:57.591Z] Total : 8806.78 34.40 0.00 0.00 14471.07 2157.23 13926.40
00:30:08.631 15:39:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=785572
00:30:08.631 15:39:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:30:08.631 15:39:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:30:08.631 15:39:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:30:08.631 15:39:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
00:30:08.631 15:39:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
00:30:08.631 15:39:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:30:08.631 15:39:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:30:08.631 {
00:30:08.631 "params": {
00:30:08.631 "name": "Nvme$subsystem",
00:30:08.631 "trtype": "$TEST_TRANSPORT",
00:30:08.631 "traddr": "$NVMF_FIRST_TARGET_IP",
00:30:08.631 "adrfam": "ipv4",
00:30:08.631 "trsvcid": "$NVMF_PORT",
00:30:08.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:30:08.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:30:08.631 "hdgst": ${hdgst:-false},
00:30:08.631 "ddgst": ${ddgst:-false}
00:30:08.631 },
00:30:08.631 "method": "bdev_nvme_attach_controller"
00:30:08.631 }
00:30:08.631 EOF
00:30:08.631 )")
00:30:08.631 15:39:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat
00:30:08.631 15:39:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq .
00:30:08.631 15:39:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=,
00:30:08.631 15:39:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:30:08.631 "params": {
00:30:08.631 "name": "Nvme1",
00:30:08.631 "trtype": "tcp",
00:30:08.631 "traddr": "10.0.0.2",
00:30:08.631 "adrfam": "ipv4",
00:30:08.631 "trsvcid": "4420",
00:30:08.631 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:30:08.631 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:30:08.631 "hdgst": false,
00:30:08.631 "ddgst": false
00:30:08.631 },
00:30:08.631 "method": "bdev_nvme_attach_controller"
00:30:08.631 }'
00:30:08.631 [2024-11-20 15:39:57.547373] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization...
00:30:08.631 [2024-11-20 15:39:57.547432] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785572 ]
00:30:08.891 [2024-11-20 15:39:57.633646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:08.891 [2024-11-20 15:39:57.668995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:30:09.153 Running I/O for 15 seconds...
00:30:11.035 11099.00 IOPS, 43.36 MiB/s
[2024-11-20T14:40:00.571Z] 11209.50 IOPS, 43.79 MiB/s
[2024-11-20T14:40:00.571Z] 15:40:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 785129 15:40:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:30:11.611 [2024-11-20 15:40:00.511949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-20 15:40:00.511990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 15:40:00.512011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-20 15:40:00.512024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 15:40:00.512036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-20 15:40:00.512045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 15:40:00.512055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:97448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-20 15:40:00.512064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 15:40:00.512074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:97456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-20 15:40:00.512082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 15:40:00.512092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-20 
15:40:00.512099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.611 [2024-11-20 15:40:00.512110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:97472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.611 [2024-11-20 15:40:00.512119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.611 [2024-11-20 15:40:00.512135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.611 [2024-11-20 15:40:00.512144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.611 [2024-11-20 15:40:00.512153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.611 [2024-11-20 15:40:00.512253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.611 [2024-11-20 15:40:00.512265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.611 [2024-11-20 15:40:00.512274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.611 [2024-11-20 15:40:00.512286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:97504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.611 [2024-11-20 15:40:00.512296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.611 [2024-11-20 15:40:00.512309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:97512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.611 [2024-11-20 15:40:00.512321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.611 [2024-11-20 15:40:00.512333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.611 [2024-11-20 15:40:00.512342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.611 [2024-11-20 15:40:00.512354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.611 [2024-11-20 15:40:00.512362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.611 [2024-11-20 15:40:00.512371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:97536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.611 [2024-11-20 15:40:00.512380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.611 [2024-11-20 15:40:00.512391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:97544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.611 [2024-11-20 15:40:00.512399] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.611 [2024-11-20 15:40:00.512409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:97552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.611 [2024-11-20 15:40:00.512416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.611 [2024-11-20 15:40:00.512426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.611 [2024-11-20 15:40:00.512434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.611 [2024-11-20 15:40:00.512445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:97568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.611 [2024-11-20 15:40:00.512453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.611 [2024-11-20 15:40:00.512464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:97576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.611 [2024-11-20 15:40:00.512474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.611 [2024-11-20 15:40:00.512485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:97584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.611 [2024-11-20 15:40:00.512492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.611 [2024-11-20 15:40:00.512501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:97592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.611 [2024-11-20 15:40:00.512508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.611 [2024-11-20 15:40:00.512518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:97600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.611 [2024-11-20 15:40:00.512526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.611 [2024-11-20 15:40:00.512535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.611 [2024-11-20 15:40:00.512543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.611 [2024-11-20 15:40:00.512552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.611 [2024-11-20 15:40:00.512560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.611 [2024-11-20 15:40:00.512570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.611 [2024-11-20 15:40:00.512577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.611 [2024-11-20 15:40:00.512586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.611 [2024-11-20 15:40:00.512593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.611 [2024-11-20 15:40:00.512603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:97640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.611 [2024-11-20 15:40:00.512610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.611 [2024-11-20 15:40:00.512620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.611 [2024-11-20 15:40:00.512627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.611 [2024-11-20 15:40:00.512636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.611 [2024-11-20 15:40:00.512643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.611 [2024-11-20 15:40:00.512652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.611 [2024-11-20 15:40:00.512660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.611 [2024-11-20 15:40:00.512669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:97672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.611 [2024-11-20 15:40:00.512676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 15:40:00.512685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.512694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 15:40:00.512704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:97688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.512712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 15:40:00.512722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.512730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 15:40:00.512739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:97704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.512747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 15:40:00.512756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:97712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.512764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 15:40:00.512773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:97720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.512781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 15:40:00.512790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:97728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.512798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 15:40:00.512808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:97736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.512815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 15:40:00.512824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.512831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 15:40:00.512841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.512848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 15:40:00.512858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:97760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.512865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 15:40:00.512875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:97768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.512882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 15:40:00.512891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:97776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.512898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 15:40:00.512910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.512917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 
15:40:00.512927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.512934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 15:40:00.512943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:97800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.512951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 15:40:00.512961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.512968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 15:40:00.512977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.512984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 15:40:00.512994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.513001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 15:40:00.513011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.513019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 15:40:00.513028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.513035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 15:40:00.513044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:97848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.513052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 15:40:00.513062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.513069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 15:40:00.513079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.513086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 15:40:00.513095] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.513103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 15:40:00.513113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:97880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.513122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 15:40:00.513131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:97888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.513138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 15:40:00.513148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.513155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 15:40:00.513169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.513176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 15:40:00.513186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:97912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.513193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 15:40:00.513203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:97920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.513210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 15:40:00.513219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.513227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 15:40:00.513236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.513244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 15:40:00.513254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:97944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.513261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 15:40:00.513270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:124 nsid:1 lba:97952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.513278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 15:40:00.513288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.513296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 15:40:00.513305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.513312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 15:40:00.513321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.513329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 15:40:00.513340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:97984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.513348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.612 [2024-11-20 15:40:00.513357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:97992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.612 [2024-11-20 15:40:00.513364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.613 [2024-11-20 15:40:00.513374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.613 [2024-11-20 15:40:00.513382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.613 [2024-11-20 15:40:00.513392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:97000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.613 [2024-11-20 15:40:00.513399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.613 [2024-11-20 15:40:00.513409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:97008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.613 [2024-11-20 15:40:00.513416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.613 [2024-11-20 15:40:00.513425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:97016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.613 [2024-11-20 15:40:00.513434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.613 [2024-11-20 15:40:00.513443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:97024 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:11.613 [2024-11-20 15:40:00.513450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.613 [2024-11-20 15:40:00.513460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.613 [2024-11-20 15:40:00.513467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.613 [2024-11-20 15:40:00.513476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:97040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.613 [2024-11-20 15:40:00.513484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.613 [2024-11-20 15:40:00.513494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:97048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.613 [2024-11-20 15:40:00.513501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.613 [2024-11-20 15:40:00.513511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:97056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.613 [2024-11-20 15:40:00.513518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.613 [2024-11-20 15:40:00.513527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:97064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.613 [2024-11-20 15:40:00.513535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.613 [2024-11-20 15:40:00.513544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:97072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.613 [2024-11-20 15:40:00.513552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.613 [2024-11-20 15:40:00.513567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:97080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.613 [2024-11-20 15:40:00.513574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.613 [2024-11-20 15:40:00.513583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:97088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.613 [2024-11-20 15:40:00.513591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.613 [2024-11-20 15:40:00.513601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:97096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.613 [2024-11-20 15:40:00.513609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.613 [2024-11-20 15:40:00.513618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:97104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.613 
[2024-11-20 15:40:00.513625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.613 [2024-11-20 15:40:00.513634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:98000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.613 [2024-11-20 15:40:00.513642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.613 [2024-11-20 15:40:00.513652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:97112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.613 [2024-11-20 15:40:00.513659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.613 [2024-11-20 15:40:00.513669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:97120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.613 [2024-11-20 15:40:00.513676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.613 [2024-11-20 15:40:00.513685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:97128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.613 [2024-11-20 15:40:00.513693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.613 [2024-11-20 15:40:00.513703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:97136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.613 [2024-11-20 15:40:00.513710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.613 [2024-11-20 15:40:00.513719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:97144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.613 [2024-11-20 15:40:00.513726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.613 [2024-11-20 15:40:00.513736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:97152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.613 [2024-11-20 15:40:00.513743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.613 [2024-11-20 15:40:00.513753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:97160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.613 [2024-11-20 15:40:00.513760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.613 [2024-11-20 15:40:00.513770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:97168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.613 [2024-11-20 15:40:00.513778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.613 [2024-11-20 15:40:00.513788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:97176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.613 [2024-11-20 15:40:00.513795] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.613 [2024-11-20 15:40:00.513808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:97184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.613 [2024-11-20 15:40:00.513815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.613 [2024-11-20 15:40:00.513825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:97192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.613 [2024-11-20 15:40:00.513832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.613 [2024-11-20 15:40:00.513841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:97200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.613 [2024-11-20 15:40:00.513849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.613 [2024-11-20 15:40:00.513858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.613 [2024-11-20 15:40:00.513865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.613 [2024-11-20 15:40:00.513875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:97216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.613 [2024-11-20 15:40:00.513882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.613 [2024-11-20 15:40:00.513892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.613 [2024-11-20 15:40:00.513899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.613 [2024-11-20 15:40:00.513909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:97232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.613 [2024-11-20 15:40:00.513916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.613 [2024-11-20 15:40:00.513925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:97240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.613 [2024-11-20 15:40:00.513932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.613 [2024-11-20 15:40:00.513942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:97248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.613 [2024-11-20 15:40:00.513949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.613 [2024-11-20 15:40:00.513959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.614 [2024-11-20 15:40:00.513966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.614 [2024-11-20 15:40:00.513976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.614 [2024-11-20 15:40:00.513983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.614 [2024-11-20 15:40:00.513993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:97272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.614 [2024-11-20 15:40:00.514001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.614 [2024-11-20 15:40:00.514011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.614 [2024-11-20 15:40:00.514018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.614 [2024-11-20 15:40:00.514027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:97288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.614 [2024-11-20 15:40:00.514034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.614 [2024-11-20 15:40:00.514043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.614 [2024-11-20 15:40:00.514051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.614 [2024-11-20 15:40:00.514061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.614 [2024-11-20 15:40:00.514068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.614 [2024-11-20 15:40:00.514077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:97304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.614 [2024-11-20 15:40:00.514085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.614 [2024-11-20 15:40:00.514095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:97312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.614 [2024-11-20 15:40:00.514102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.614 [2024-11-20 15:40:00.514111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:97320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.614 [2024-11-20 15:40:00.514119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.614 [2024-11-20 15:40:00.514129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:97328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.614 [2024-11-20 15:40:00.514136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.614 [2024-11-20 15:40:00.514145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:97336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.614 [2024-11-20 15:40:00.514152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.614 [2024-11-20 15:40:00.514165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:97344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.614 [2024-11-20 15:40:00.514174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.614 [2024-11-20 15:40:00.514183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.614 [2024-11-20 15:40:00.514191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.614 [2024-11-20 15:40:00.514200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:97360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.614 [2024-11-20 15:40:00.514208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.614 [2024-11-20 15:40:00.514218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:97368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.614 [2024-11-20 15:40:00.514225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.614 [2024-11-20 15:40:00.514234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:97376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.614 [2024-11-20 15:40:00.514242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.614 [2024-11-20 15:40:00.514252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:97384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.614 [2024-11-20 15:40:00.514260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.614 [2024-11-20 15:40:00.514270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:97392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.614 [2024-11-20 15:40:00.514277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.614 [2024-11-20 15:40:00.514287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:97400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.614 [2024-11-20 15:40:00.514294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.614 [2024-11-20 15:40:00.514304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:97408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.614 [2024-11-20 15:40:00.514311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.614 
[2024-11-20 15:40:00.514319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc2150 is same with the state(6) to be set 00:30:11.614 [2024-11-20 15:40:00.514329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:11.614 [2024-11-20 15:40:00.514335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:11.614 [2024-11-20 15:40:00.514342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97416 len:8 PRP1 0x0 PRP2 0x0 00:30:11.614 [2024-11-20 15:40:00.514352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.614 [2024-11-20 15:40:00.514434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:11.614 [2024-11-20 15:40:00.514445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.614 [2024-11-20 15:40:00.514454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:11.614 [2024-11-20 15:40:00.514461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.614 [2024-11-20 15:40:00.514469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:11.614 [2024-11-20 15:40:00.514477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.614 [2024-11-20 15:40:00.514485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:11.614 [2024-11-20 15:40:00.514492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.614 [2024-11-20 15:40:00.514502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:11.614 [2024-11-20 15:40:00.518021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.614 [2024-11-20 15:40:00.518042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:11.614 [2024-11-20 15:40:00.518825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.614 [2024-11-20 15:40:00.518842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:11.614 [2024-11-20 15:40:00.518852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:11.614 [2024-11-20 15:40:00.519069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:11.614 [2024-11-20 15:40:00.519292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.614 [2024-11-20 15:40:00.519301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.614 
[2024-11-20 15:40:00.519311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.614 [2024-11-20 15:40:00.519320] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:11.614 [2024-11-20 15:40:00.532091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.614 [2024-11-20 15:40:00.532740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.614 [2024-11-20 15:40:00.532780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:11.614 [2024-11-20 15:40:00.532791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:11.614 [2024-11-20 15:40:00.533028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:11.614 [2024-11-20 15:40:00.533259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.614 [2024-11-20 15:40:00.533270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.614 [2024-11-20 15:40:00.533278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.614 [2024-11-20 15:40:00.533287] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:11.614 [2024-11-20 15:40:00.545858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.614 [2024-11-20 15:40:00.546544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.614 [2024-11-20 15:40:00.546585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:11.614 [2024-11-20 15:40:00.546596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:11.614 [2024-11-20 15:40:00.546833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:11.615 [2024-11-20 15:40:00.547054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.615 [2024-11-20 15:40:00.547064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.615 [2024-11-20 15:40:00.547072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.615 [2024-11-20 15:40:00.547080] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
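
Every completion printed above carries the same status pair, which SPDK renders as "(00/08)": status code type 0x0 (generic command status) and status code 0x08, defined by the NVMe base specification as Command Aborted due to SQ Deletion; the trailing p/m/dnr fields are the Phase Tag, More, and Do Not Retry bits from the same status halfword. A minimal decoding sketch in plain C, using the bit layout of completion dword 3 from the spec (the helper below is illustrative, not SPDK's own printer):

#include <stdint.h>
#include <stdio.h>

/* NVMe completion DW3[31:16] is the status halfword:
 * bit 0 = Phase Tag (P), bits 8:1 = Status Code (SC),
 * bits 11:9 = Status Code Type (SCT), bits 13:12 = CRD,
 * bit 14 = More (M), bit 15 = Do Not Retry (DNR). */
static void print_status(uint16_t status)
{
	unsigned p   = status & 0x1;
	unsigned sc  = (status >> 1) & 0xff;
	unsigned sct = (status >> 9) & 0x7;
	unsigned m   = (status >> 14) & 0x1;
	unsigned dnr = (status >> 15) & 0x1;

	printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
	if (sct == 0x0 && sc == 0x08)
		printf("generic status: Command Aborted due to SQ Deletion\n");
}

int main(void)
{
	print_status(0x08 << 1); /* SCT 0x0, SC 0x08, as in the log above */
	return 0;
}
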
00:30:11.615 [2024-11-20 15:40:00.559656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.615 [2024-11-20 15:40:00.560261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.615 [2024-11-20 15:40:00.560304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:11.615 [2024-11-20 15:40:00.560316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:11.615 [2024-11-20 15:40:00.560555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:11.615 [2024-11-20 15:40:00.560777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.615 [2024-11-20 15:40:00.560786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.615 [2024-11-20 15:40:00.560795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.615 [2024-11-20 15:40:00.560803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:11.877 [2024-11-20 15:40:00.573574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.877 [2024-11-20 15:40:00.574215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.877 [2024-11-20 15:40:00.574259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:11.877 [2024-11-20 15:40:00.574272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:11.877 [2024-11-20 15:40:00.574512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:11.877 [2024-11-20 15:40:00.574733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.877 [2024-11-20 15:40:00.574743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.877 [2024-11-20 15:40:00.574752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.877 [2024-11-20 15:40:00.574761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.877 [2024-11-20 15:40:00.587331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.877 [2024-11-20 15:40:00.588008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.877 [2024-11-20 15:40:00.588052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:11.877 [2024-11-20 15:40:00.588064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:11.877 [2024-11-20 15:40:00.588313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:11.877 [2024-11-20 15:40:00.588536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.877 [2024-11-20 15:40:00.588545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.877 [2024-11-20 15:40:00.588554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.877 [2024-11-20 15:40:00.588563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:11.877 [2024-11-20 15:40:00.601134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.877 [2024-11-20 15:40:00.601782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.877 [2024-11-20 15:40:00.601829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:11.877 [2024-11-20 15:40:00.601846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:11.877 [2024-11-20 15:40:00.602086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:11.877 [2024-11-20 15:40:00.602318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.877 [2024-11-20 15:40:00.602329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.877 [2024-11-20 15:40:00.602337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.877 [2024-11-20 15:40:00.602346] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
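
Each retry above fails identically: posix_sock_create reports connect() errno = 111, which is ECONNREFUSED on Linux, meaning nothing is accepting on 10.0.0.2:4420 while the target side is down. The follow-on "Failed to flush tqpair ... (9): Bad file descriptor" is errno 9 (EBADF), since the qpair's socket never came up. A self-contained reproduction of the first error against a local port with no listener (4420 is the NVMe/TCP well-known port; the sketch assumes nothing is bound to it on loopback):

#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_in sa = {0};
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	sa.sin_family = AF_INET;
	sa.sin_port = htons(4420);
	inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);

	/* With no listener on the port, connect() fails and errno is
	 * ECONNREFUSED (111 on Linux), matching the log lines above. */
	if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
		printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
	close(fd);
	return 0;
}
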
00:30:11.877 [2024-11-20 15:40:00.614906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.877 [2024-11-20 15:40:00.615584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.877 [2024-11-20 15:40:00.615633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:11.877 [2024-11-20 15:40:00.615645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:11.877 [2024-11-20 15:40:00.615887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:11.877 [2024-11-20 15:40:00.616109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.877 [2024-11-20 15:40:00.616119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.877 [2024-11-20 15:40:00.616128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.877 [2024-11-20 15:40:00.616136] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:11.877 [2024-11-20 15:40:00.628716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.877 [2024-11-20 15:40:00.629293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.877 [2024-11-20 15:40:00.629344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:11.877 [2024-11-20 15:40:00.629358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:11.877 [2024-11-20 15:40:00.629604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:11.877 [2024-11-20 15:40:00.629827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.877 [2024-11-20 15:40:00.629838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.877 [2024-11-20 15:40:00.629846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.877 [2024-11-20 15:40:00.629855] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.877 [2024-11-20 15:40:00.642661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.877 [2024-11-20 15:40:00.643251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.877 [2024-11-20 15:40:00.643305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:11.877 [2024-11-20 15:40:00.643319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:11.877 [2024-11-20 15:40:00.643566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:11.877 [2024-11-20 15:40:00.643795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.877 [2024-11-20 15:40:00.643806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.877 [2024-11-20 15:40:00.643815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.878 [2024-11-20 15:40:00.643824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:11.878 [2024-11-20 15:40:00.656424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.878 [2024-11-20 15:40:00.657111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.878 [2024-11-20 15:40:00.657185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:11.878 [2024-11-20 15:40:00.657199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:11.878 [2024-11-20 15:40:00.657449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:11.878 [2024-11-20 15:40:00.657673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.878 [2024-11-20 15:40:00.657684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.878 [2024-11-20 15:40:00.657693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.878 [2024-11-20 15:40:00.657703] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.878 [2024-11-20 15:40:00.670217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.878 [2024-11-20 15:40:00.670953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.878 [2024-11-20 15:40:00.671018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:11.878 [2024-11-20 15:40:00.671032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:11.878 [2024-11-20 15:40:00.671299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:11.878 [2024-11-20 15:40:00.671526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.878 [2024-11-20 15:40:00.671539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.878 [2024-11-20 15:40:00.671548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.878 [2024-11-20 15:40:00.671558] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:11.878 [2024-11-20 15:40:00.684147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.878 [2024-11-20 15:40:00.684873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.878 [2024-11-20 15:40:00.684938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:11.878 [2024-11-20 15:40:00.684952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:11.878 [2024-11-20 15:40:00.685219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:11.878 [2024-11-20 15:40:00.685446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.878 [2024-11-20 15:40:00.685458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.878 [2024-11-20 15:40:00.685474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.878 [2024-11-20 15:40:00.685483] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.878 [2024-11-20 15:40:00.698081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.878 [2024-11-20 15:40:00.698804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.878 [2024-11-20 15:40:00.698871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:11.878 [2024-11-20 15:40:00.698884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:11.878 [2024-11-20 15:40:00.699137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:11.878 [2024-11-20 15:40:00.699378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.878 [2024-11-20 15:40:00.699391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.878 [2024-11-20 15:40:00.699400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.878 [2024-11-20 15:40:00.699410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:11.878 [2024-11-20 15:40:00.711996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.878 [2024-11-20 15:40:00.712718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.878 [2024-11-20 15:40:00.712784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:11.878 [2024-11-20 15:40:00.712797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:11.878 [2024-11-20 15:40:00.713050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:11.878 [2024-11-20 15:40:00.713287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.878 [2024-11-20 15:40:00.713301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.878 [2024-11-20 15:40:00.713310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.878 [2024-11-20 15:40:00.713319] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
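
The cycle repeating above (nvme_ctrlr_disconnect, a fresh connect attempt, EBADF on the flush, then spdk_nvme_ctrlr_reconnect_poll_async declaring reinitialization failed) comes around roughly every 14 ms: the bdev_nvme layer re-arms the reset immediately after each failure. A generic sketch of that shape with a bounded retry count (the limits and helper below are illustrative, not SPDK's actual reset path):

#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

/* Illustrative stand-in for one reset cycle: disconnect the
 * controller, re-dial the target, poll initialization. In the log
 * every attempt dies at connect() with ECONNREFUSED, so this
 * version simply fails. */
static bool try_reconnect(void)
{
	return false;
}

int main(void)
{
	const int max_retries = 8;

	for (int i = 0; i < max_retries; i++) {
		if (try_reconnect()) {
			puts("controller reconnected");
			return 0;
		}
		fprintf(stderr, "attempt %d: resetting controller failed\n", i + 1);
		usleep(14 * 1000); /* ~14 ms between cycles, as observed above */
	}
	fprintf(stderr, "giving up after %d attempts\n", max_retries);
	return 1;
}
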
00:30:11.878 [2024-11-20 15:40:00.725906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.878 [2024-11-20 15:40:00.726597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.878 [2024-11-20 15:40:00.726663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:11.878 [2024-11-20 15:40:00.726676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:11.878 [2024-11-20 15:40:00.726929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:11.878 [2024-11-20 15:40:00.727155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.878 [2024-11-20 15:40:00.727182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.878 [2024-11-20 15:40:00.727192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.878 [2024-11-20 15:40:00.727201] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:11.878 [2024-11-20 15:40:00.739800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.878 [2024-11-20 15:40:00.740504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.878 [2024-11-20 15:40:00.740570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:11.878 [2024-11-20 15:40:00.740584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:11.878 [2024-11-20 15:40:00.740836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:11.878 [2024-11-20 15:40:00.741062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.878 [2024-11-20 15:40:00.741073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.878 [2024-11-20 15:40:00.741082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.878 [2024-11-20 15:40:00.741092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.878 [2024-11-20 15:40:00.753718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.878 [2024-11-20 15:40:00.754485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.878 [2024-11-20 15:40:00.754550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:11.878 [2024-11-20 15:40:00.754563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:11.878 [2024-11-20 15:40:00.754816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:11.878 [2024-11-20 15:40:00.755042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.878 [2024-11-20 15:40:00.755053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.878 [2024-11-20 15:40:00.755062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.878 [2024-11-20 15:40:00.755071] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:11.878 [2024-11-20 15:40:00.767672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.878 [2024-11-20 15:40:00.768278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.878 [2024-11-20 15:40:00.768344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:11.878 [2024-11-20 15:40:00.768360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:11.878 [2024-11-20 15:40:00.768614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:11.878 [2024-11-20 15:40:00.768840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.878 [2024-11-20 15:40:00.768852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.878 [2024-11-20 15:40:00.768861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.878 [2024-11-20 15:40:00.768870] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.878 [2024-11-20 15:40:00.781484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.878 [2024-11-20 15:40:00.782088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.878 [2024-11-20 15:40:00.782153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:11.878 [2024-11-20 15:40:00.782187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:11.878 [2024-11-20 15:40:00.782440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:11.878 [2024-11-20 15:40:00.782665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.878 [2024-11-20 15:40:00.782677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.879 [2024-11-20 15:40:00.782687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.879 [2024-11-20 15:40:00.782696] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:11.879 [2024-11-20 15:40:00.795308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.879 [2024-11-20 15:40:00.796027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.879 [2024-11-20 15:40:00.796092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:11.879 [2024-11-20 15:40:00.796105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:11.879 [2024-11-20 15:40:00.796369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:11.879 [2024-11-20 15:40:00.796597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.879 [2024-11-20 15:40:00.796608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.879 [2024-11-20 15:40:00.796617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.879 [2024-11-20 15:40:00.796629] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.879 [2024-11-20 15:40:00.809225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.879 [2024-11-20 15:40:00.809690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.879 [2024-11-20 15:40:00.809724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:11.879 [2024-11-20 15:40:00.809734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:11.879 [2024-11-20 15:40:00.809957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:11.879 [2024-11-20 15:40:00.810189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.879 [2024-11-20 15:40:00.810203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.879 [2024-11-20 15:40:00.810211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.879 [2024-11-20 15:40:00.810222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:11.879 [2024-11-20 15:40:00.823003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.879 [2024-11-20 15:40:00.823688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.879 [2024-11-20 15:40:00.823754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:11.879 [2024-11-20 15:40:00.823767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:11.879 [2024-11-20 15:40:00.824020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:11.879 [2024-11-20 15:40:00.824266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.879 [2024-11-20 15:40:00.824279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.879 [2024-11-20 15:40:00.824288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.879 [2024-11-20 15:40:00.824297] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
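
With hundreds of near-identical abort prints, the dump is easier to read as counts. A small stdin filter in C that tallies the aborted completions and the READ/WRITE commands they correspond to (the matched substrings are taken verbatim from the lines above; it assumes the raw console log with one record per line, whereas this capture has several records folded onto each line):

#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[4096];
	unsigned long reads = 0, writes = 0, aborts = 0;

	while (fgets(line, sizeof(line), stdin)) {
		if (strstr(line, "ABORTED - SQ DELETION"))
			aborts++;
		if (strstr(line, "nvme_io_qpair_print_command")) {
			if (strstr(line, "READ sqid:"))
				reads++;
			else if (strstr(line, "WRITE sqid:"))
				writes++;
		}
	}
	printf("aborted completions: %lu (reads: %lu, writes: %lu)\n",
	       aborts, reads, writes);
	return 0;
}
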
00:30:12.141 [2024-11-20 15:40:00.836893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.141 [2024-11-20 15:40:00.837504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.141 [2024-11-20 15:40:00.837539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.141 [2024-11-20 15:40:00.837548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.141 [2024-11-20 15:40:00.837770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.141 [2024-11-20 15:40:00.837991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.141 [2024-11-20 15:40:00.838003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.141 [2024-11-20 15:40:00.838011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.141 [2024-11-20 15:40:00.838020] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.141 [2024-11-20 15:40:00.850818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.141 [2024-11-20 15:40:00.851488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.141 [2024-11-20 15:40:00.851553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.141 [2024-11-20 15:40:00.851566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.141 [2024-11-20 15:40:00.851818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.141 [2024-11-20 15:40:00.852044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.141 [2024-11-20 15:40:00.852056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.141 [2024-11-20 15:40:00.852065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.141 [2024-11-20 15:40:00.852075] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.141 [2024-11-20 15:40:00.864692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.141 [2024-11-20 15:40:00.865309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.141 [2024-11-20 15:40:00.865379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.142 [2024-11-20 15:40:00.865394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.142 [2024-11-20 15:40:00.865647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.142 [2024-11-20 15:40:00.865873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.142 [2024-11-20 15:40:00.865885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.142 [2024-11-20 15:40:00.865901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.142 [2024-11-20 15:40:00.865911] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.142 [2024-11-20 15:40:00.878508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.142 [2024-11-20 15:40:00.879213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.142 [2024-11-20 15:40:00.879279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.142 [2024-11-20 15:40:00.879293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.142 [2024-11-20 15:40:00.879546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.142 [2024-11-20 15:40:00.879772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.142 [2024-11-20 15:40:00.879783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.142 [2024-11-20 15:40:00.879793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.142 [2024-11-20 15:40:00.879803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.142 [2024-11-20 15:40:00.892401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.142 [2024-11-20 15:40:00.893125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.142 [2024-11-20 15:40:00.893199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.142 [2024-11-20 15:40:00.893213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.142 [2024-11-20 15:40:00.893465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.142 [2024-11-20 15:40:00.893706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.142 [2024-11-20 15:40:00.893719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.142 [2024-11-20 15:40:00.893728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.142 [2024-11-20 15:40:00.893737] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.142 [2024-11-20 15:40:00.906333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.142 [2024-11-20 15:40:00.907029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.142 [2024-11-20 15:40:00.907094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.142 [2024-11-20 15:40:00.907108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.142 [2024-11-20 15:40:00.907374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.142 [2024-11-20 15:40:00.907601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.142 [2024-11-20 15:40:00.907613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.142 [2024-11-20 15:40:00.907622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.142 [2024-11-20 15:40:00.907632] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.142 [2024-11-20 15:40:00.920225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.142 [2024-11-20 15:40:00.920952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.142 [2024-11-20 15:40:00.921017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.142 [2024-11-20 15:40:00.921030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.142 [2024-11-20 15:40:00.921297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.142 [2024-11-20 15:40:00.921524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.142 [2024-11-20 15:40:00.921535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.142 [2024-11-20 15:40:00.921544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.142 [2024-11-20 15:40:00.921554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.142 [2024-11-20 15:40:00.934141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.142 [2024-11-20 15:40:00.934819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.142 [2024-11-20 15:40:00.934884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.142 [2024-11-20 15:40:00.934898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.142 [2024-11-20 15:40:00.935152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.142 [2024-11-20 15:40:00.935393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.142 [2024-11-20 15:40:00.935405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.142 [2024-11-20 15:40:00.935415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.142 [2024-11-20 15:40:00.935425] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.142 [2024-11-20 15:40:00.948015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.142 [2024-11-20 15:40:00.948700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.142 [2024-11-20 15:40:00.948766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.142 [2024-11-20 15:40:00.948779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.142 [2024-11-20 15:40:00.949032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.142 [2024-11-20 15:40:00.949272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.142 [2024-11-20 15:40:00.949285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.142 [2024-11-20 15:40:00.949294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.142 [2024-11-20 15:40:00.949304] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.142 [2024-11-20 15:40:00.961909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.142 [2024-11-20 15:40:00.962508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.142 [2024-11-20 15:40:00.962541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.142 [2024-11-20 15:40:00.962564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.142 [2024-11-20 15:40:00.962786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.142 [2024-11-20 15:40:00.963007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.142 [2024-11-20 15:40:00.963017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.142 [2024-11-20 15:40:00.963025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.142 [2024-11-20 15:40:00.963033] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.142 [2024-11-20 15:40:00.975825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.142 [2024-11-20 15:40:00.976417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.142 [2024-11-20 15:40:00.976443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.142 [2024-11-20 15:40:00.976453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.142 [2024-11-20 15:40:00.976671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.142 [2024-11-20 15:40:00.976890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.142 [2024-11-20 15:40:00.976902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.142 [2024-11-20 15:40:00.976910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.142 [2024-11-20 15:40:00.976918] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.142 9502.67 IOPS, 37.12 MiB/s [2024-11-20T14:40:01.102Z] [2024-11-20 15:40:00.989733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.142 [2024-11-20 15:40:00.990434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.142 [2024-11-20 15:40:00.990499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.142 [2024-11-20 15:40:00.990513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.142 [2024-11-20 15:40:00.990765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.142 [2024-11-20 15:40:00.990991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.142 [2024-11-20 15:40:00.991003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.142 [2024-11-20 15:40:00.991014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.142 [2024-11-20 15:40:00.991024] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.142 [2024-11-20 15:40:01.003660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.143 [2024-11-20 15:40:01.004407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.143 [2024-11-20 15:40:01.004472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.143 [2024-11-20 15:40:01.004486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.143 [2024-11-20 15:40:01.004739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.143 [2024-11-20 15:40:01.004974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.143 [2024-11-20 15:40:01.004987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.143 [2024-11-20 15:40:01.004997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.143 [2024-11-20 15:40:01.005007] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.143 [2024-11-20 15:40:01.017621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.143 [2024-11-20 15:40:01.018270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.143 [2024-11-20 15:40:01.018336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.143 [2024-11-20 15:40:01.018350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.143 [2024-11-20 15:40:01.018605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.143 [2024-11-20 15:40:01.018831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.143 [2024-11-20 15:40:01.018845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.143 [2024-11-20 15:40:01.018856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.143 [2024-11-20 15:40:01.018866] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.143 [2024-11-20 15:40:01.031482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.143 [2024-11-20 15:40:01.032110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.143 [2024-11-20 15:40:01.032141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.143 [2024-11-20 15:40:01.032151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.143 [2024-11-20 15:40:01.032382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.143 [2024-11-20 15:40:01.032604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.143 [2024-11-20 15:40:01.032616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.143 [2024-11-20 15:40:01.032626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.143 [2024-11-20 15:40:01.032635] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.143 [2024-11-20 15:40:01.045427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.143 [2024-11-20 15:40:01.046085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.143 [2024-11-20 15:40:01.046151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.143 [2024-11-20 15:40:01.046175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.143 [2024-11-20 15:40:01.046429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.143 [2024-11-20 15:40:01.046656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.143 [2024-11-20 15:40:01.046667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.143 [2024-11-20 15:40:01.046684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.143 [2024-11-20 15:40:01.046694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.143 [2024-11-20 15:40:01.059213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.143 [2024-11-20 15:40:01.059862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.143 [2024-11-20 15:40:01.059893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.143 [2024-11-20 15:40:01.059903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.143 [2024-11-20 15:40:01.060123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.143 [2024-11-20 15:40:01.060353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.143 [2024-11-20 15:40:01.060367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.143 [2024-11-20 15:40:01.060375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.143 [2024-11-20 15:40:01.060384] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.143 [2024-11-20 15:40:01.072974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.143 [2024-11-20 15:40:01.073612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.143 [2024-11-20 15:40:01.073678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.143 [2024-11-20 15:40:01.073692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.143 [2024-11-20 15:40:01.073945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.143 [2024-11-20 15:40:01.074183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.143 [2024-11-20 15:40:01.074196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.143 [2024-11-20 15:40:01.074205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.143 [2024-11-20 15:40:01.074215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.143 [2024-11-20 15:40:01.086815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.143 [2024-11-20 15:40:01.087504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.143 [2024-11-20 15:40:01.087569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.143 [2024-11-20 15:40:01.087583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.143 [2024-11-20 15:40:01.087835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.143 [2024-11-20 15:40:01.088062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.143 [2024-11-20 15:40:01.088073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.143 [2024-11-20 15:40:01.088082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.143 [2024-11-20 15:40:01.088092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.406 [2024-11-20 15:40:01.100725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.406 [2024-11-20 15:40:01.101480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.406 [2024-11-20 15:40:01.101545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.406 [2024-11-20 15:40:01.101558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.406 [2024-11-20 15:40:01.101811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.406 [2024-11-20 15:40:01.102036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.406 [2024-11-20 15:40:01.102047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.406 [2024-11-20 15:40:01.102056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.406 [2024-11-20 15:40:01.102066] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.406 [2024-11-20 15:40:01.113911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.406 [2024-11-20 15:40:01.114562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.406 [2024-11-20 15:40:01.114621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.406 [2024-11-20 15:40:01.114631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.406 [2024-11-20 15:40:01.114814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.406 [2024-11-20 15:40:01.114971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.406 [2024-11-20 15:40:01.114979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.406 [2024-11-20 15:40:01.114987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.406 [2024-11-20 15:40:01.114995] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.406 [2024-11-20 15:40:01.126616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.406 [2024-11-20 15:40:01.127224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.406 [2024-11-20 15:40:01.127279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.406 [2024-11-20 15:40:01.127289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.406 [2024-11-20 15:40:01.127469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.406 [2024-11-20 15:40:01.127625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.406 [2024-11-20 15:40:01.127634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.406 [2024-11-20 15:40:01.127642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.406 [2024-11-20 15:40:01.127650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.406 [2024-11-20 15:40:01.139270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.406 [2024-11-20 15:40:01.139865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.406 [2024-11-20 15:40:01.139916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.406 [2024-11-20 15:40:01.139931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.406 [2024-11-20 15:40:01.140109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.406 [2024-11-20 15:40:01.140277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.406 [2024-11-20 15:40:01.140286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.406 [2024-11-20 15:40:01.140292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.406 [2024-11-20 15:40:01.140300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.406 [2024-11-20 15:40:01.151887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.406 [2024-11-20 15:40:01.152490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.406 [2024-11-20 15:40:01.152539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.406 [2024-11-20 15:40:01.152548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.406 [2024-11-20 15:40:01.152723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.406 [2024-11-20 15:40:01.152879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.406 [2024-11-20 15:40:01.152888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.406 [2024-11-20 15:40:01.152894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.406 [2024-11-20 15:40:01.152902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.406 [2024-11-20 15:40:01.164508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.406 [2024-11-20 15:40:01.165025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.406 [2024-11-20 15:40:01.165046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.406 [2024-11-20 15:40:01.165053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.406 [2024-11-20 15:40:01.165211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.406 [2024-11-20 15:40:01.165363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.406 [2024-11-20 15:40:01.165371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.406 [2024-11-20 15:40:01.165377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.406 [2024-11-20 15:40:01.165383] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.406 [2024-11-20 15:40:01.177092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.406 [2024-11-20 15:40:01.177546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.406 [2024-11-20 15:40:01.177563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.406 [2024-11-20 15:40:01.177569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.406 [2024-11-20 15:40:01.177718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.406 [2024-11-20 15:40:01.177874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.406 [2024-11-20 15:40:01.177881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.406 [2024-11-20 15:40:01.177887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.406 [2024-11-20 15:40:01.177892] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.406 [2024-11-20 15:40:01.189677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.406 [2024-11-20 15:40:01.190305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.406 [2024-11-20 15:40:01.190343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.406 [2024-11-20 15:40:01.190353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.406 [2024-11-20 15:40:01.190522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.406 [2024-11-20 15:40:01.190676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.406 [2024-11-20 15:40:01.190684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.406 [2024-11-20 15:40:01.190690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.406 [2024-11-20 15:40:01.190697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.406 [2024-11-20 15:40:01.202287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.406 [2024-11-20 15:40:01.202894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.406 [2024-11-20 15:40:01.202931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.406 [2024-11-20 15:40:01.202939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.406 [2024-11-20 15:40:01.203108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.406 [2024-11-20 15:40:01.203270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.406 [2024-11-20 15:40:01.203278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.406 [2024-11-20 15:40:01.203284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.406 [2024-11-20 15:40:01.203290] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.406 [2024-11-20 15:40:01.214996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.406 [2024-11-20 15:40:01.215437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.406 [2024-11-20 15:40:01.215471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.406 [2024-11-20 15:40:01.215480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.406 [2024-11-20 15:40:01.215648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.406 [2024-11-20 15:40:01.215801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.406 [2024-11-20 15:40:01.215808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.406 [2024-11-20 15:40:01.215818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.406 [2024-11-20 15:40:01.215825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.406 [2024-11-20 15:40:01.227682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.406 [2024-11-20 15:40:01.228288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.406 [2024-11-20 15:40:01.228322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.406 [2024-11-20 15:40:01.228330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.406 [2024-11-20 15:40:01.228496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.406 [2024-11-20 15:40:01.228650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.407 [2024-11-20 15:40:01.228657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.407 [2024-11-20 15:40:01.228663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.407 [2024-11-20 15:40:01.228669] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.407 [2024-11-20 15:40:01.240390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.407 [2024-11-20 15:40:01.240990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.407 [2024-11-20 15:40:01.241023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.407 [2024-11-20 15:40:01.241032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.407 [2024-11-20 15:40:01.241203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.407 [2024-11-20 15:40:01.241356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.407 [2024-11-20 15:40:01.241364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.407 [2024-11-20 15:40:01.241370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.407 [2024-11-20 15:40:01.241376] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.407 [2024-11-20 15:40:01.253087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.407 [2024-11-20 15:40:01.253572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.407 [2024-11-20 15:40:01.253588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.407 [2024-11-20 15:40:01.253594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.407 [2024-11-20 15:40:01.253743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.407 [2024-11-20 15:40:01.253893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.407 [2024-11-20 15:40:01.253899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.407 [2024-11-20 15:40:01.253905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.407 [2024-11-20 15:40:01.253910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.407 [2024-11-20 15:40:01.265764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.407 [2024-11-20 15:40:01.266393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.407 [2024-11-20 15:40:01.266425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.407 [2024-11-20 15:40:01.266434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.407 [2024-11-20 15:40:01.266599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.407 [2024-11-20 15:40:01.266751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.407 [2024-11-20 15:40:01.266759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.407 [2024-11-20 15:40:01.266764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.407 [2024-11-20 15:40:01.266770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.407 [2024-11-20 15:40:01.278346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.407 [2024-11-20 15:40:01.278868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.407 [2024-11-20 15:40:01.278901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.407 [2024-11-20 15:40:01.278909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.407 [2024-11-20 15:40:01.279074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.407 [2024-11-20 15:40:01.279233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.407 [2024-11-20 15:40:01.279241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.407 [2024-11-20 15:40:01.279248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.407 [2024-11-20 15:40:01.279254] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.407 [2024-11-20 15:40:01.290970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.407 [2024-11-20 15:40:01.291529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.407 [2024-11-20 15:40:01.291561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.407 [2024-11-20 15:40:01.291570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.407 [2024-11-20 15:40:01.291735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.407 [2024-11-20 15:40:01.291888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.407 [2024-11-20 15:40:01.291895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.407 [2024-11-20 15:40:01.291901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.407 [2024-11-20 15:40:01.291908] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.407 [2024-11-20 15:40:01.303631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.407 [2024-11-20 15:40:01.304121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.407 [2024-11-20 15:40:01.304136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.407 [2024-11-20 15:40:01.304146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.407 [2024-11-20 15:40:01.304300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.407 [2024-11-20 15:40:01.304450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.407 [2024-11-20 15:40:01.304457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.407 [2024-11-20 15:40:01.304462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.407 [2024-11-20 15:40:01.304467] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.407 [2024-11-20 15:40:01.316311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.407 [2024-11-20 15:40:01.316759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.407 [2024-11-20 15:40:01.316773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.407 [2024-11-20 15:40:01.316778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.407 [2024-11-20 15:40:01.316927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.407 [2024-11-20 15:40:01.317076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.407 [2024-11-20 15:40:01.317082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.407 [2024-11-20 15:40:01.317087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.407 [2024-11-20 15:40:01.317093] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.407 [2024-11-20 15:40:01.328939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.407 [2024-11-20 15:40:01.329334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.407 [2024-11-20 15:40:01.329366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.407 [2024-11-20 15:40:01.329375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.407 [2024-11-20 15:40:01.329542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.407 [2024-11-20 15:40:01.329695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.407 [2024-11-20 15:40:01.329702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.407 [2024-11-20 15:40:01.329708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.407 [2024-11-20 15:40:01.329714] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.407 [2024-11-20 15:40:01.341589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.407 [2024-11-20 15:40:01.342054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.407 [2024-11-20 15:40:01.342070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.407 [2024-11-20 15:40:01.342076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.407 [2024-11-20 15:40:01.342229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.407 [2024-11-20 15:40:01.342383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.407 [2024-11-20 15:40:01.342390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.407 [2024-11-20 15:40:01.342396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.407 [2024-11-20 15:40:01.342401] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.407 [2024-11-20 15:40:01.354261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.407 [2024-11-20 15:40:01.354718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.407 [2024-11-20 15:40:01.354732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.407 [2024-11-20 15:40:01.354737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.407 [2024-11-20 15:40:01.354886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.407 [2024-11-20 15:40:01.355034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.408 [2024-11-20 15:40:01.355041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.408 [2024-11-20 15:40:01.355046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.408 [2024-11-20 15:40:01.355051] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.668 [2024-11-20 15:40:01.366900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.668 [2024-11-20 15:40:01.367474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.668 [2024-11-20 15:40:01.367506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.668 [2024-11-20 15:40:01.367515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.668 [2024-11-20 15:40:01.367681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.668 [2024-11-20 15:40:01.367833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.668 [2024-11-20 15:40:01.367840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.668 [2024-11-20 15:40:01.367846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.668 [2024-11-20 15:40:01.367853] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.668 [2024-11-20 15:40:01.379580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.668 [2024-11-20 15:40:01.380056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.668 [2024-11-20 15:40:01.380072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.668 [2024-11-20 15:40:01.380077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.668 [2024-11-20 15:40:01.380231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.668 [2024-11-20 15:40:01.380382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.668 [2024-11-20 15:40:01.380388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.668 [2024-11-20 15:40:01.380399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.668 [2024-11-20 15:40:01.380404] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.669 [2024-11-20 15:40:01.392262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.669 [2024-11-20 15:40:01.392702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.669 [2024-11-20 15:40:01.392716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.669 [2024-11-20 15:40:01.392722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.669 [2024-11-20 15:40:01.392871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.669 [2024-11-20 15:40:01.393020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.669 [2024-11-20 15:40:01.393027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.669 [2024-11-20 15:40:01.393032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.669 [2024-11-20 15:40:01.393037] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.669 [2024-11-20 15:40:01.404900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.669 [2024-11-20 15:40:01.405350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.669 [2024-11-20 15:40:01.405365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.669 [2024-11-20 15:40:01.405371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.669 [2024-11-20 15:40:01.405519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.669 [2024-11-20 15:40:01.405668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.669 [2024-11-20 15:40:01.405675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.669 [2024-11-20 15:40:01.405680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.669 [2024-11-20 15:40:01.405685] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.669 [2024-11-20 15:40:01.417534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.669 [2024-11-20 15:40:01.418087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.669 [2024-11-20 15:40:01.418101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.669 [2024-11-20 15:40:01.418106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.669 [2024-11-20 15:40:01.418260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.669 [2024-11-20 15:40:01.418410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.669 [2024-11-20 15:40:01.418416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.669 [2024-11-20 15:40:01.418421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.669 [2024-11-20 15:40:01.418427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.669 [2024-11-20 15:40:01.430143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.669 [2024-11-20 15:40:01.430594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.669 [2024-11-20 15:40:01.430607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.669 [2024-11-20 15:40:01.430613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.669 [2024-11-20 15:40:01.430762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.669 [2024-11-20 15:40:01.430910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.669 [2024-11-20 15:40:01.430917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.669 [2024-11-20 15:40:01.430922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.669 [2024-11-20 15:40:01.430927] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.669 [2024-11-20 15:40:01.442776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.669 [2024-11-20 15:40:01.443221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.669 [2024-11-20 15:40:01.443234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.669 [2024-11-20 15:40:01.443240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.669 [2024-11-20 15:40:01.443388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.669 [2024-11-20 15:40:01.443537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.669 [2024-11-20 15:40:01.443544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.669 [2024-11-20 15:40:01.443549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.669 [2024-11-20 15:40:01.443554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.669 [2024-11-20 15:40:01.455415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.669 [2024-11-20 15:40:01.455890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.669 [2024-11-20 15:40:01.455902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.669 [2024-11-20 15:40:01.455908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.669 [2024-11-20 15:40:01.456056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.669 [2024-11-20 15:40:01.456209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.669 [2024-11-20 15:40:01.456216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.669 [2024-11-20 15:40:01.456222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.669 [2024-11-20 15:40:01.456227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.669 [2024-11-20 15:40:01.468078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.669 [2024-11-20 15:40:01.468563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.669 [2024-11-20 15:40:01.468577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.669 [2024-11-20 15:40:01.468585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.669 [2024-11-20 15:40:01.468734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.669 [2024-11-20 15:40:01.468882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.669 [2024-11-20 15:40:01.468889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.669 [2024-11-20 15:40:01.468894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.669 [2024-11-20 15:40:01.468899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.669 [2024-11-20 15:40:01.480741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.669 [2024-11-20 15:40:01.481188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.669 [2024-11-20 15:40:01.481202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.669 [2024-11-20 15:40:01.481207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.669 [2024-11-20 15:40:01.481356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.669 [2024-11-20 15:40:01.481504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.669 [2024-11-20 15:40:01.481511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.669 [2024-11-20 15:40:01.481516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.669 [2024-11-20 15:40:01.481521] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.669 [2024-11-20 15:40:01.493365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.669 [2024-11-20 15:40:01.493852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.669 [2024-11-20 15:40:01.493865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.669 [2024-11-20 15:40:01.493870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.669 [2024-11-20 15:40:01.494019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.669 [2024-11-20 15:40:01.494173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.669 [2024-11-20 15:40:01.494179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.669 [2024-11-20 15:40:01.494185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.669 [2024-11-20 15:40:01.494190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.669 [2024-11-20 15:40:01.506053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.669 [2024-11-20 15:40:01.506515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.669 [2024-11-20 15:40:01.506529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.669 [2024-11-20 15:40:01.506534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.669 [2024-11-20 15:40:01.506683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.669 [2024-11-20 15:40:01.506834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.669 [2024-11-20 15:40:01.506841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.669 [2024-11-20 15:40:01.506846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.669 [2024-11-20 15:40:01.506851] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.669 [2024-11-20 15:40:01.518705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.669 [2024-11-20 15:40:01.519155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.669 [2024-11-20 15:40:01.519172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.669 [2024-11-20 15:40:01.519178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.669 [2024-11-20 15:40:01.519328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.669 [2024-11-20 15:40:01.519476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.669 [2024-11-20 15:40:01.519483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.669 [2024-11-20 15:40:01.519488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.669 [2024-11-20 15:40:01.519493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.669 [2024-11-20 15:40:01.531349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.669 [2024-11-20 15:40:01.531806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.669 [2024-11-20 15:40:01.531819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.669 [2024-11-20 15:40:01.531825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.669 [2024-11-20 15:40:01.531974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.669 [2024-11-20 15:40:01.532123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.669 [2024-11-20 15:40:01.532130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.669 [2024-11-20 15:40:01.532135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.669 [2024-11-20 15:40:01.532140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.669 [2024-11-20 15:40:01.544030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.669 [2024-11-20 15:40:01.544596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.669 [2024-11-20 15:40:01.544613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.669 [2024-11-20 15:40:01.544619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.669 [2024-11-20 15:40:01.544769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.669 [2024-11-20 15:40:01.544918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.669 [2024-11-20 15:40:01.544924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.669 [2024-11-20 15:40:01.544932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.669 [2024-11-20 15:40:01.544937] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.669 [2024-11-20 15:40:01.556666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.669 [2024-11-20 15:40:01.557016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.669 [2024-11-20 15:40:01.557030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.669 [2024-11-20 15:40:01.557036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.669 [2024-11-20 15:40:01.557190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.670 [2024-11-20 15:40:01.557340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.670 [2024-11-20 15:40:01.557347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.670 [2024-11-20 15:40:01.557352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.670 [2024-11-20 15:40:01.557357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.670 [2024-11-20 15:40:01.569362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.670 [2024-11-20 15:40:01.569804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.670 [2024-11-20 15:40:01.569818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.670 [2024-11-20 15:40:01.569823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.670 [2024-11-20 15:40:01.569972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.670 [2024-11-20 15:40:01.570121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.670 [2024-11-20 15:40:01.570127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.670 [2024-11-20 15:40:01.570132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.670 [2024-11-20 15:40:01.570137] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.670 [2024-11-20 15:40:01.581993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.670 [2024-11-20 15:40:01.582475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.670 [2024-11-20 15:40:01.582488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.670 [2024-11-20 15:40:01.582494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.670 [2024-11-20 15:40:01.582642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.670 [2024-11-20 15:40:01.582791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.670 [2024-11-20 15:40:01.582797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.670 [2024-11-20 15:40:01.582802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.670 [2024-11-20 15:40:01.582807] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.670 [2024-11-20 15:40:01.594669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.670 [2024-11-20 15:40:01.595146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.670 [2024-11-20 15:40:01.595164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.670 [2024-11-20 15:40:01.595170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.670 [2024-11-20 15:40:01.595318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.670 [2024-11-20 15:40:01.595468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.670 [2024-11-20 15:40:01.595474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.670 [2024-11-20 15:40:01.595480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.670 [2024-11-20 15:40:01.595485] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.670 [2024-11-20 15:40:01.607348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.670 [2024-11-20 15:40:01.607788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.670 [2024-11-20 15:40:01.607801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.670 [2024-11-20 15:40:01.607806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.670 [2024-11-20 15:40:01.607955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.670 [2024-11-20 15:40:01.608103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.670 [2024-11-20 15:40:01.608110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.670 [2024-11-20 15:40:01.608115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.670 [2024-11-20 15:40:01.608120] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.670 [2024-11-20 15:40:01.619975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.670 [2024-11-20 15:40:01.620485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.670 [2024-11-20 15:40:01.620499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.670 [2024-11-20 15:40:01.620505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.670 [2024-11-20 15:40:01.620653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.670 [2024-11-20 15:40:01.620802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.670 [2024-11-20 15:40:01.620809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.670 [2024-11-20 15:40:01.620814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.670 [2024-11-20 15:40:01.620820] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.931 [2024-11-20 15:40:01.632677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.931 [2024-11-20 15:40:01.633156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.931 [2024-11-20 15:40:01.633174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.931 [2024-11-20 15:40:01.633182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.931 [2024-11-20 15:40:01.633331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.931 [2024-11-20 15:40:01.633480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.932 [2024-11-20 15:40:01.633487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.932 [2024-11-20 15:40:01.633492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.932 [2024-11-20 15:40:01.633497] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.932 [2024-11-20 15:40:01.645356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.932 [2024-11-20 15:40:01.645804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.932 [2024-11-20 15:40:01.645817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.932 [2024-11-20 15:40:01.645823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.932 [2024-11-20 15:40:01.645971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.932 [2024-11-20 15:40:01.646121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.932 [2024-11-20 15:40:01.646127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.932 [2024-11-20 15:40:01.646132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.932 [2024-11-20 15:40:01.646137] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.932 [2024-11-20 15:40:01.658004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.932 [2024-11-20 15:40:01.658577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.932 [2024-11-20 15:40:01.658609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.932 [2024-11-20 15:40:01.658618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.932 [2024-11-20 15:40:01.658782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.932 [2024-11-20 15:40:01.658934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.932 [2024-11-20 15:40:01.658941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.932 [2024-11-20 15:40:01.658947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.932 [2024-11-20 15:40:01.658954] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.932 [2024-11-20 15:40:01.670680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.932 [2024-11-20 15:40:01.671170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.932 [2024-11-20 15:40:01.671187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.932 [2024-11-20 15:40:01.671193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.932 [2024-11-20 15:40:01.671343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.932 [2024-11-20 15:40:01.671497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.932 [2024-11-20 15:40:01.671504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.932 [2024-11-20 15:40:01.671509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.932 [2024-11-20 15:40:01.671514] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.932 [2024-11-20 15:40:01.683378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.932 [2024-11-20 15:40:01.683961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.932 [2024-11-20 15:40:01.683993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.932 [2024-11-20 15:40:01.684002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.932 [2024-11-20 15:40:01.684175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.932 [2024-11-20 15:40:01.684330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.932 [2024-11-20 15:40:01.684337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.932 [2024-11-20 15:40:01.684344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.932 [2024-11-20 15:40:01.684350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.932 [2024-11-20 15:40:01.696020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.932 [2024-11-20 15:40:01.696507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.932 [2024-11-20 15:40:01.696524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.932 [2024-11-20 15:40:01.696529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.932 [2024-11-20 15:40:01.696679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.932 [2024-11-20 15:40:01.696828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.932 [2024-11-20 15:40:01.696835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.932 [2024-11-20 15:40:01.696840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.932 [2024-11-20 15:40:01.696845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.932 [2024-11-20 15:40:01.708702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.932 [2024-11-20 15:40:01.709197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.932 [2024-11-20 15:40:01.709219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.932 [2024-11-20 15:40:01.709225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.932 [2024-11-20 15:40:01.709379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.932 [2024-11-20 15:40:01.709529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.932 [2024-11-20 15:40:01.709536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.932 [2024-11-20 15:40:01.709545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.932 [2024-11-20 15:40:01.709551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.932 [2024-11-20 15:40:01.721274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.932 [2024-11-20 15:40:01.721623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.932 [2024-11-20 15:40:01.721638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.932 [2024-11-20 15:40:01.721644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.932 [2024-11-20 15:40:01.721792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.932 [2024-11-20 15:40:01.721942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.932 [2024-11-20 15:40:01.721948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.932 [2024-11-20 15:40:01.721953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.932 [2024-11-20 15:40:01.721959] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.932 [2024-11-20 15:40:01.733959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.932 [2024-11-20 15:40:01.734437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.932 [2024-11-20 15:40:01.734451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.932 [2024-11-20 15:40:01.734457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.932 [2024-11-20 15:40:01.734605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.932 [2024-11-20 15:40:01.734755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.932 [2024-11-20 15:40:01.734761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.932 [2024-11-20 15:40:01.734766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.932 [2024-11-20 15:40:01.734771] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.932 [2024-11-20 15:40:01.746626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.932 [2024-11-20 15:40:01.747062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.932 [2024-11-20 15:40:01.747075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.932 [2024-11-20 15:40:01.747081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.932 [2024-11-20 15:40:01.747234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.932 [2024-11-20 15:40:01.747384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.932 [2024-11-20 15:40:01.747391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.932 [2024-11-20 15:40:01.747396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.933 [2024-11-20 15:40:01.747400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.933 [2024-11-20 15:40:01.759274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.933 [2024-11-20 15:40:01.759841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.933 [2024-11-20 15:40:01.759873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.933 [2024-11-20 15:40:01.759882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.933 [2024-11-20 15:40:01.760046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.933 [2024-11-20 15:40:01.760206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.933 [2024-11-20 15:40:01.760215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.933 [2024-11-20 15:40:01.760221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.933 [2024-11-20 15:40:01.760226] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.977 [2024-11-20 15:40:01.771946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.977 [2024-11-20 15:40:01.772441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.977 [2024-11-20 15:40:01.772459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.977 [2024-11-20 15:40:01.772465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.977 [2024-11-20 15:40:01.772614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.977 [2024-11-20 15:40:01.772763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.978 [2024-11-20 15:40:01.772770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.978 [2024-11-20 15:40:01.772775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.978 [2024-11-20 15:40:01.772781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.978 [2024-11-20 15:40:01.784643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.978 [2024-11-20 15:40:01.785061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.978 [2024-11-20 15:40:01.785075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.978 [2024-11-20 15:40:01.785080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.978 [2024-11-20 15:40:01.785235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.978 [2024-11-20 15:40:01.785385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.978 [2024-11-20 15:40:01.785391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.978 [2024-11-20 15:40:01.785396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.978 [2024-11-20 15:40:01.785402] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.978 [2024-11-20 15:40:01.797268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.978 [2024-11-20 15:40:01.797714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.978 [2024-11-20 15:40:01.797728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.978 [2024-11-20 15:40:01.797737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.978 [2024-11-20 15:40:01.797886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.978 [2024-11-20 15:40:01.798035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.978 [2024-11-20 15:40:01.798041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.978 [2024-11-20 15:40:01.798047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.978 [2024-11-20 15:40:01.798052] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.978 [2024-11-20 15:40:01.809916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.978 [2024-11-20 15:40:01.810347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.978 [2024-11-20 15:40:01.810361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.978 [2024-11-20 15:40:01.810366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.978 [2024-11-20 15:40:01.810515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.978 [2024-11-20 15:40:01.810664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.978 [2024-11-20 15:40:01.810670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.978 [2024-11-20 15:40:01.810676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.978 [2024-11-20 15:40:01.810680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.978 [2024-11-20 15:40:01.822539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.978 [2024-11-20 15:40:01.823021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.978 [2024-11-20 15:40:01.823034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.978 [2024-11-20 15:40:01.823040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.978 [2024-11-20 15:40:01.823193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.978 [2024-11-20 15:40:01.823343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.978 [2024-11-20 15:40:01.823350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.978 [2024-11-20 15:40:01.823355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.978 [2024-11-20 15:40:01.823360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.978 [2024-11-20 15:40:01.835219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.978 [2024-11-20 15:40:01.835701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.978 [2024-11-20 15:40:01.835714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.978 [2024-11-20 15:40:01.835720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.978 [2024-11-20 15:40:01.835868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.978 [2024-11-20 15:40:01.836020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.978 [2024-11-20 15:40:01.836027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.978 [2024-11-20 15:40:01.836032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.978 [2024-11-20 15:40:01.836037] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.978 [2024-11-20 15:40:01.847895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.978 [2024-11-20 15:40:01.848374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.978 [2024-11-20 15:40:01.848388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.978 [2024-11-20 15:40:01.848394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.978 [2024-11-20 15:40:01.848543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.978 [2024-11-20 15:40:01.848691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.978 [2024-11-20 15:40:01.848698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.978 [2024-11-20 15:40:01.848703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.978 [2024-11-20 15:40:01.848708] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.978 [2024-11-20 15:40:01.860574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.978 [2024-11-20 15:40:01.861060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.978 [2024-11-20 15:40:01.861074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.978 [2024-11-20 15:40:01.861080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.978 [2024-11-20 15:40:01.861234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.978 [2024-11-20 15:40:01.861383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.978 [2024-11-20 15:40:01.861390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.978 [2024-11-20 15:40:01.861395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.978 [2024-11-20 15:40:01.861400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.978 [2024-11-20 15:40:01.873253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.978 [2024-11-20 15:40:01.873791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.978 [2024-11-20 15:40:01.873823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.978 [2024-11-20 15:40:01.873832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.978 [2024-11-20 15:40:01.873996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.978 [2024-11-20 15:40:01.874149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.978 [2024-11-20 15:40:01.874156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.978 [2024-11-20 15:40:01.874174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.978 [2024-11-20 15:40:01.874181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:12.978 [2024-11-20 15:40:01.885903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.979 [2024-11-20 15:40:01.886373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.979 [2024-11-20 15:40:01.886390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:12.979 [2024-11-20 15:40:01.886396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:12.979 [2024-11-20 15:40:01.886545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:12.979 [2024-11-20 15:40:01.886695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.979 [2024-11-20 15:40:01.886701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.979 [2024-11-20 15:40:01.886707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.979 [2024-11-20 15:40:01.886712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.241 [2024-11-20 15:40:01.898577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.241 [2024-11-20 15:40:01.899072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.241 [2024-11-20 15:40:01.899085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.241 [2024-11-20 15:40:01.899091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.241 [2024-11-20 15:40:01.899244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.241 [2024-11-20 15:40:01.899394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.241 [2024-11-20 15:40:01.899401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.241 [2024-11-20 15:40:01.899407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.241 [2024-11-20 15:40:01.899412] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:13.241 [2024-11-20 15:40:01.911266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.241 [2024-11-20 15:40:01.911742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.241 [2024-11-20 15:40:01.911755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.241 [2024-11-20 15:40:01.911760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.241 [2024-11-20 15:40:01.911908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.241 [2024-11-20 15:40:01.912058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.241 [2024-11-20 15:40:01.912064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.241 [2024-11-20 15:40:01.912069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.241 [2024-11-20 15:40:01.912074] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.241 [2024-11-20 15:40:01.923938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.241 [2024-11-20 15:40:01.924497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.241 [2024-11-20 15:40:01.924529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.242 [2024-11-20 15:40:01.924538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.242 [2024-11-20 15:40:01.924702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.242 [2024-11-20 15:40:01.924855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.242 [2024-11-20 15:40:01.924862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.242 [2024-11-20 15:40:01.924869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.242 [2024-11-20 15:40:01.924875] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:13.242 [2024-11-20 15:40:01.936592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.242 [2024-11-20 15:40:01.937088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.242 [2024-11-20 15:40:01.937104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.242 [2024-11-20 15:40:01.937110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.242 [2024-11-20 15:40:01.937263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.242 [2024-11-20 15:40:01.937413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.242 [2024-11-20 15:40:01.937420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.242 [2024-11-20 15:40:01.937425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.242 [2024-11-20 15:40:01.937430] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.242 [2024-11-20 15:40:01.949275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.242 [2024-11-20 15:40:01.949792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.242 [2024-11-20 15:40:01.949824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.242 [2024-11-20 15:40:01.949833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.242 [2024-11-20 15:40:01.949997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.242 [2024-11-20 15:40:01.950149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.242 [2024-11-20 15:40:01.950156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.242 [2024-11-20 15:40:01.950171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.242 [2024-11-20 15:40:01.950177] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:13.242 [2024-11-20 15:40:01.961909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.242 [2024-11-20 15:40:01.962360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.242 [2024-11-20 15:40:01.962377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.242 [2024-11-20 15:40:01.962387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.242 [2024-11-20 15:40:01.962537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.242 [2024-11-20 15:40:01.962687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.242 [2024-11-20 15:40:01.962693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.242 [2024-11-20 15:40:01.962698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.242 [2024-11-20 15:40:01.962703] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.242 [2024-11-20 15:40:01.974566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.242 [2024-11-20 15:40:01.975053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.242 [2024-11-20 15:40:01.975067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.242 [2024-11-20 15:40:01.975072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.242 [2024-11-20 15:40:01.975227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.242 [2024-11-20 15:40:01.975376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.242 [2024-11-20 15:40:01.975382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.242 [2024-11-20 15:40:01.975388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.242 [2024-11-20 15:40:01.975393] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:13.242 7127.00 IOPS, 27.84 MiB/s [2024-11-20T14:40:02.202Z] [2024-11-20 15:40:01.988110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.242 [2024-11-20 15:40:01.988683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.242 [2024-11-20 15:40:01.988716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.242 [2024-11-20 15:40:01.988725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.242 [2024-11-20 15:40:01.988889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.242 [2024-11-20 15:40:01.989042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.242 [2024-11-20 15:40:01.989050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.242 [2024-11-20 15:40:01.989055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.242 [2024-11-20 15:40:01.989061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.242 [2024-11-20 15:40:02.000794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.242 [2024-11-20 15:40:02.001309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.242 [2024-11-20 15:40:02.001326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.242 [2024-11-20 15:40:02.001332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.242 [2024-11-20 15:40:02.001482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.242 [2024-11-20 15:40:02.001635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.242 [2024-11-20 15:40:02.001642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.242 [2024-11-20 15:40:02.001647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.242 [2024-11-20 15:40:02.001652] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:13.242 [2024-11-20 15:40:02.013502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.242 [2024-11-20 15:40:02.013980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.242 [2024-11-20 15:40:02.013994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.243 [2024-11-20 15:40:02.013999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.243 [2024-11-20 15:40:02.014147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.243 [2024-11-20 15:40:02.014302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.243 [2024-11-20 15:40:02.014309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.243 [2024-11-20 15:40:02.014314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.243 [2024-11-20 15:40:02.014319] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.243 [2024-11-20 15:40:02.026166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.243 [2024-11-20 15:40:02.026627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.243 [2024-11-20 15:40:02.026640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.243 [2024-11-20 15:40:02.026646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.243 [2024-11-20 15:40:02.026795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.243 [2024-11-20 15:40:02.026944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.243 [2024-11-20 15:40:02.026950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.243 [2024-11-20 15:40:02.026956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.243 [2024-11-20 15:40:02.026961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:13.243 [2024-11-20 15:40:02.038800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.243 [2024-11-20 15:40:02.039386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.243 [2024-11-20 15:40:02.039418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.243 [2024-11-20 15:40:02.039427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.243 [2024-11-20 15:40:02.039591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.243 [2024-11-20 15:40:02.039743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.243 [2024-11-20 15:40:02.039751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.243 [2024-11-20 15:40:02.039760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.243 [2024-11-20 15:40:02.039766] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.243 [2024-11-20 15:40:02.051480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.243 [2024-11-20 15:40:02.052034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.243 [2024-11-20 15:40:02.052066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.243 [2024-11-20 15:40:02.052075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.243 [2024-11-20 15:40:02.052246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.243 [2024-11-20 15:40:02.052400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.243 [2024-11-20 15:40:02.052407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.243 [2024-11-20 15:40:02.052413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.243 [2024-11-20 15:40:02.052419] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:13.243 [2024-11-20 15:40:02.064130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.243 [2024-11-20 15:40:02.064687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.243 [2024-11-20 15:40:02.064719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.243 [2024-11-20 15:40:02.064727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.243 [2024-11-20 15:40:02.064892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.243 [2024-11-20 15:40:02.065044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.243 [2024-11-20 15:40:02.065051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.243 [2024-11-20 15:40:02.065056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.243 [2024-11-20 15:40:02.065062] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.243 [2024-11-20 15:40:02.076771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.243 [2024-11-20 15:40:02.077127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.243 [2024-11-20 15:40:02.077145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.243 [2024-11-20 15:40:02.077151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.243 [2024-11-20 15:40:02.077304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.243 [2024-11-20 15:40:02.077455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.243 [2024-11-20 15:40:02.077461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.243 [2024-11-20 15:40:02.077466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.243 [2024-11-20 15:40:02.077472] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:13.243 [2024-11-20 15:40:02.089458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.243 [2024-11-20 15:40:02.089946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.243 [2024-11-20 15:40:02.089960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.243 [2024-11-20 15:40:02.089965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.243 [2024-11-20 15:40:02.090114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.243 [2024-11-20 15:40:02.090269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.243 [2024-11-20 15:40:02.090276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.243 [2024-11-20 15:40:02.090282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.243 [2024-11-20 15:40:02.090287] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.243 [2024-11-20 15:40:02.102143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.243 [2024-11-20 15:40:02.102735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.243 [2024-11-20 15:40:02.102767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.243 [2024-11-20 15:40:02.102776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.244 [2024-11-20 15:40:02.102941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.244 [2024-11-20 15:40:02.103093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.244 [2024-11-20 15:40:02.103100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.244 [2024-11-20 15:40:02.103107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.244 [2024-11-20 15:40:02.103113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:13.244 [2024-11-20 15:40:02.114825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.244 [2024-11-20 15:40:02.115278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.244 [2024-11-20 15:40:02.115310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.244 [2024-11-20 15:40:02.115319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.244 [2024-11-20 15:40:02.115486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.244 [2024-11-20 15:40:02.115638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.244 [2024-11-20 15:40:02.115645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.244 [2024-11-20 15:40:02.115652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.244 [2024-11-20 15:40:02.115658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.244 [2024-11-20 15:40:02.127508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.244 [2024-11-20 15:40:02.128110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.244 [2024-11-20 15:40:02.128142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.244 [2024-11-20 15:40:02.128154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.244 [2024-11-20 15:40:02.128326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.244 [2024-11-20 15:40:02.128480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.244 [2024-11-20 15:40:02.128487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.244 [2024-11-20 15:40:02.128493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.244 [2024-11-20 15:40:02.128499] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:13.244 [2024-11-20 15:40:02.140210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.244 [2024-11-20 15:40:02.140690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.244 [2024-11-20 15:40:02.140720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.244 [2024-11-20 15:40:02.140729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.244 [2024-11-20 15:40:02.140894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.244 [2024-11-20 15:40:02.141046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.244 [2024-11-20 15:40:02.141053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.244 [2024-11-20 15:40:02.141059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.244 [2024-11-20 15:40:02.141065] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.244 [2024-11-20 15:40:02.152787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.244 [2024-11-20 15:40:02.153368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.244 [2024-11-20 15:40:02.153400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.244 [2024-11-20 15:40:02.153409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.244 [2024-11-20 15:40:02.153573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.244 [2024-11-20 15:40:02.153725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.244 [2024-11-20 15:40:02.153732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.244 [2024-11-20 15:40:02.153738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.244 [2024-11-20 15:40:02.153744] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:13.244 [2024-11-20 15:40:02.165463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.244 [2024-11-20 15:40:02.166053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.244 [2024-11-20 15:40:02.166085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.244 [2024-11-20 15:40:02.166094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.244 [2024-11-20 15:40:02.166265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.244 [2024-11-20 15:40:02.166421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.244 [2024-11-20 15:40:02.166428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.244 [2024-11-20 15:40:02.166434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.244 [2024-11-20 15:40:02.166440] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.244 [2024-11-20 15:40:02.178140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.244 [2024-11-20 15:40:02.178590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.244 [2024-11-20 15:40:02.178607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.244 [2024-11-20 15:40:02.178613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.244 [2024-11-20 15:40:02.178762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.244 [2024-11-20 15:40:02.178912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.244 [2024-11-20 15:40:02.178918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.244 [2024-11-20 15:40:02.178923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.244 [2024-11-20 15:40:02.178928] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:13.244 [2024-11-20 15:40:02.190777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.244 [2024-11-20 15:40:02.191360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.244 [2024-11-20 15:40:02.191392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.244 [2024-11-20 15:40:02.191401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.244 [2024-11-20 15:40:02.191565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.245 [2024-11-20 15:40:02.191718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.245 [2024-11-20 15:40:02.191725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.245 [2024-11-20 15:40:02.191731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.245 [2024-11-20 15:40:02.191737] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.506 [2024-11-20 15:40:02.203474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.506 [2024-11-20 15:40:02.204064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.506 [2024-11-20 15:40:02.204096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.506 [2024-11-20 15:40:02.204105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.506 [2024-11-20 15:40:02.204277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.506 [2024-11-20 15:40:02.204430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.506 [2024-11-20 15:40:02.204437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.506 [2024-11-20 15:40:02.204451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.506 [2024-11-20 15:40:02.204457] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:13.506 [2024-11-20 15:40:02.216166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.506 [2024-11-20 15:40:02.216719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.506 [2024-11-20 15:40:02.216751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.506 [2024-11-20 15:40:02.216760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.506 [2024-11-20 15:40:02.216924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.506 [2024-11-20 15:40:02.217076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.506 [2024-11-20 15:40:02.217083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.506 [2024-11-20 15:40:02.217089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.506 [2024-11-20 15:40:02.217095] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.506 [2024-11-20 15:40:02.228804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.506 [2024-11-20 15:40:02.229300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.506 [2024-11-20 15:40:02.229317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.506 [2024-11-20 15:40:02.229323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.506 [2024-11-20 15:40:02.229472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.506 [2024-11-20 15:40:02.229623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.506 [2024-11-20 15:40:02.229630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.506 [2024-11-20 15:40:02.229635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.506 [2024-11-20 15:40:02.229640] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:13.506 [2024-11-20 15:40:02.241476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.506 [2024-11-20 15:40:02.241916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.506 [2024-11-20 15:40:02.241929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.506 [2024-11-20 15:40:02.241935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.506 [2024-11-20 15:40:02.242083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.506 [2024-11-20 15:40:02.242237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.506 [2024-11-20 15:40:02.242244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.506 [2024-11-20 15:40:02.242249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.506 [2024-11-20 15:40:02.242254] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.506 [2024-11-20 15:40:02.254094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.506 [2024-11-20 15:40:02.254690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.506 [2024-11-20 15:40:02.254722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.506 [2024-11-20 15:40:02.254731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.507 [2024-11-20 15:40:02.254895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.507 [2024-11-20 15:40:02.255048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.507 [2024-11-20 15:40:02.255055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.507 [2024-11-20 15:40:02.255061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.507 [2024-11-20 15:40:02.255068] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:13.507 [2024-11-20 15:40:02.266783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.507 [2024-11-20 15:40:02.267383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.507 [2024-11-20 15:40:02.267415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.507 [2024-11-20 15:40:02.267424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.507 [2024-11-20 15:40:02.267588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.507 [2024-11-20 15:40:02.267740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.507 [2024-11-20 15:40:02.267747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.507 [2024-11-20 15:40:02.267753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.507 [2024-11-20 15:40:02.267759] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.507 [2024-11-20 15:40:02.279463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.507 [2024-11-20 15:40:02.279958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.507 [2024-11-20 15:40:02.279974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.507 [2024-11-20 15:40:02.279980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.507 [2024-11-20 15:40:02.280129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.507 [2024-11-20 15:40:02.280285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.507 [2024-11-20 15:40:02.280292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.507 [2024-11-20 15:40:02.280298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.507 [2024-11-20 15:40:02.280303] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:13.507 [2024-11-20 15:40:02.292141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.507 [2024-11-20 15:40:02.292680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.507 [2024-11-20 15:40:02.292712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.507 [2024-11-20 15:40:02.292724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.507 [2024-11-20 15:40:02.292889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.507 [2024-11-20 15:40:02.293042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.507 [2024-11-20 15:40:02.293049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.507 [2024-11-20 15:40:02.293055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.507 [2024-11-20 15:40:02.293061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.507 [2024-11-20 15:40:02.304796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.507 [2024-11-20 15:40:02.305306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.507 [2024-11-20 15:40:02.305338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.507 [2024-11-20 15:40:02.305347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.507 [2024-11-20 15:40:02.305513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.507 [2024-11-20 15:40:02.305665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.507 [2024-11-20 15:40:02.305672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.507 [2024-11-20 15:40:02.305678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.507 [2024-11-20 15:40:02.305684] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:13.507 [2024-11-20 15:40:02.317399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.507 [2024-11-20 15:40:02.317892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.507 [2024-11-20 15:40:02.317908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.507 [2024-11-20 15:40:02.317914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.507 [2024-11-20 15:40:02.318063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.507 [2024-11-20 15:40:02.318218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.507 [2024-11-20 15:40:02.318226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.507 [2024-11-20 15:40:02.318231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.507 [2024-11-20 15:40:02.318237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.507 [2024-11-20 15:40:02.330070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.507 [2024-11-20 15:40:02.330623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.507 [2024-11-20 15:40:02.330655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.507 [2024-11-20 15:40:02.330664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.507 [2024-11-20 15:40:02.330828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.507 [2024-11-20 15:40:02.330984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.507 [2024-11-20 15:40:02.330991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.507 [2024-11-20 15:40:02.330997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.507 [2024-11-20 15:40:02.331005] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:13.507 [2024-11-20 15:40:02.342711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.507 [2024-11-20 15:40:02.343307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.507 [2024-11-20 15:40:02.343339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.507 [2024-11-20 15:40:02.343348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.507 [2024-11-20 15:40:02.343512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.507 [2024-11-20 15:40:02.343665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.507 [2024-11-20 15:40:02.343672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.507 [2024-11-20 15:40:02.343677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.507 [2024-11-20 15:40:02.343683] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.507 [2024-11-20 15:40:02.355404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.507 [2024-11-20 15:40:02.355910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.507 [2024-11-20 15:40:02.355927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.507 [2024-11-20 15:40:02.355932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.508 [2024-11-20 15:40:02.356082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.508 [2024-11-20 15:40:02.356237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.508 [2024-11-20 15:40:02.356244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.508 [2024-11-20 15:40:02.356249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.508 [2024-11-20 15:40:02.356254] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:13.508 [2024-11-20 15:40:02.368087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.508 [2024-11-20 15:40:02.368632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.508 [2024-11-20 15:40:02.368664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.508 [2024-11-20 15:40:02.368673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.508 [2024-11-20 15:40:02.368837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.508 [2024-11-20 15:40:02.368989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.508 [2024-11-20 15:40:02.368997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.508 [2024-11-20 15:40:02.369006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.508 [2024-11-20 15:40:02.369013] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.508 [2024-11-20 15:40:02.380728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.508 [2024-11-20 15:40:02.381210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.508 [2024-11-20 15:40:02.381232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.508 [2024-11-20 15:40:02.381239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.508 [2024-11-20 15:40:02.381394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.508 [2024-11-20 15:40:02.381544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.508 [2024-11-20 15:40:02.381551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.508 [2024-11-20 15:40:02.381557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.508 [2024-11-20 15:40:02.381562] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:13.508 [2024-11-20 15:40:02.393424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.508 [2024-11-20 15:40:02.393865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.508 [2024-11-20 15:40:02.393879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.508 [2024-11-20 15:40:02.393884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.508 [2024-11-20 15:40:02.394033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.508 [2024-11-20 15:40:02.394187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.508 [2024-11-20 15:40:02.394194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.508 [2024-11-20 15:40:02.394200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.508 [2024-11-20 15:40:02.394204] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.508 [2024-11-20 15:40:02.406066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.508 [2024-11-20 15:40:02.406522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.508 [2024-11-20 15:40:02.406537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.508 [2024-11-20 15:40:02.406543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.508 [2024-11-20 15:40:02.406691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.508 [2024-11-20 15:40:02.406840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.508 [2024-11-20 15:40:02.406847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.508 [2024-11-20 15:40:02.406852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.508 [2024-11-20 15:40:02.406857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:13.508 [2024-11-20 15:40:02.418720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.508 [2024-11-20 15:40:02.419203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.508 [2024-11-20 15:40:02.419218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.508 [2024-11-20 15:40:02.419223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.508 [2024-11-20 15:40:02.419372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.508 [2024-11-20 15:40:02.419523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.508 [2024-11-20 15:40:02.419530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.508 [2024-11-20 15:40:02.419537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.508 [2024-11-20 15:40:02.419544] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.508 [2024-11-20 15:40:02.431402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.508 [2024-11-20 15:40:02.431885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.508 [2024-11-20 15:40:02.431898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.508 [2024-11-20 15:40:02.431904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.508 [2024-11-20 15:40:02.432052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.508 [2024-11-20 15:40:02.432206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.508 [2024-11-20 15:40:02.432212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.508 [2024-11-20 15:40:02.432218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.508 [2024-11-20 15:40:02.432224] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:13.508 [2024-11-20 15:40:02.444075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.508 [2024-11-20 15:40:02.444560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.508 [2024-11-20 15:40:02.444574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.508 [2024-11-20 15:40:02.444579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.508 [2024-11-20 15:40:02.444727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.508 [2024-11-20 15:40:02.444877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.508 [2024-11-20 15:40:02.444883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.508 [2024-11-20 15:40:02.444889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.508 [2024-11-20 15:40:02.444894] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.508 [2024-11-20 15:40:02.456757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.508 [2024-11-20 15:40:02.457264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.508 [2024-11-20 15:40:02.457302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.508 [2024-11-20 15:40:02.457315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.508 [2024-11-20 15:40:02.457482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.508 [2024-11-20 15:40:02.457634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.508 [2024-11-20 15:40:02.457641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.508 [2024-11-20 15:40:02.457646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.508 [2024-11-20 15:40:02.457652] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:13.770 [2024-11-20 15:40:02.469370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.770 [2024-11-20 15:40:02.469963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.770 [2024-11-20 15:40:02.469994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.770 [2024-11-20 15:40:02.470003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.770 [2024-11-20 15:40:02.470175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.770 [2024-11-20 15:40:02.470329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.770 [2024-11-20 15:40:02.470336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.770 [2024-11-20 15:40:02.470342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.770 [2024-11-20 15:40:02.470347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.770 [2024-11-20 15:40:02.482052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.770 [2024-11-20 15:40:02.482646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.770 [2024-11-20 15:40:02.482678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.770 [2024-11-20 15:40:02.482687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.770 [2024-11-20 15:40:02.482851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.770 [2024-11-20 15:40:02.483003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.770 [2024-11-20 15:40:02.483010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.770 [2024-11-20 15:40:02.483017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.770 [2024-11-20 15:40:02.483023] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:13.770 [2024-11-20 15:40:02.494736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.770 [2024-11-20 15:40:02.495199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.770 [2024-11-20 15:40:02.495221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.770 [2024-11-20 15:40:02.495227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.770 [2024-11-20 15:40:02.495382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.770 [2024-11-20 15:40:02.495537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.770 [2024-11-20 15:40:02.495544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.770 [2024-11-20 15:40:02.495549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.770 [2024-11-20 15:40:02.495554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.770 [2024-11-20 15:40:02.507409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.770 [2024-11-20 15:40:02.507952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.770 [2024-11-20 15:40:02.507984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.770 [2024-11-20 15:40:02.507993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.770 [2024-11-20 15:40:02.508164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.770 [2024-11-20 15:40:02.508318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.770 [2024-11-20 15:40:02.508324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.770 [2024-11-20 15:40:02.508330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.770 [2024-11-20 15:40:02.508337] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:13.770 [2024-11-20 15:40:02.520040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.770 [2024-11-20 15:40:02.520506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.770 [2024-11-20 15:40:02.520522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:13.770 [2024-11-20 15:40:02.520528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:13.770 [2024-11-20 15:40:02.520677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:13.770 [2024-11-20 15:40:02.520827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.771 [2024-11-20 15:40:02.520834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.771 [2024-11-20 15:40:02.520839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.771 [2024-11-20 15:40:02.520844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.771-00:30:14.036 [log condensed: the identical reset cycle above (resetting controller -> connect() failed, errno = 111 -> sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 -> Failed to flush tqpair (9): Bad file descriptor -> Ctrlr is in error state -> controller reinitialization failed -> in failed state. -> Resetting controller failed.) repeats 36 more times, roughly every 12-13 ms, from 15:40:02.532709 through 15:40:02.976464; entries differ only in timestamps]
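The call sites in these entries trace SPDK's split reset path: bdev_nvme disconnects the controller, starts an asynchronous reconnect, and polls spdk_nvme_ctrlr_reconnect_poll_async() until the attempt completes, failing the reset when every connect() ends in ECONNREFUSED. A hedged sketch of that flow against the public NVMe driver API; the function names appear in the log itself, but the exact signatures and the -EAGAIN return convention are assumptions to verify against include/spdk/nvme.h in the tree under test:

    #include <errno.h>
    #include "spdk/nvme.h"

    /* Sketch of the disconnect -> reconnect-poll reset flow; assumes
     * spdk_nvme_ctrlr_reconnect_poll_async() returns -EAGAIN while the
     * reconnect is still in progress and a negative errno on failure. */
    static int
    reset_ctrlr_polling(struct spdk_nvme_ctrlr *ctrlr)
    {
        int rc = spdk_nvme_ctrlr_disconnect(ctrlr);   /* "resetting controller" */
        if (rc != 0) {
            return rc;
        }
        spdk_nvme_ctrlr_reconnect_async(ctrlr);
        do {
            rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
        } while (rc == -EAGAIN);
        /* A negative rc here is the "controller reinitialization failed" /
         * "Resetting controller failed." outcome logged above. */
        return rc;
    }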
00:30:14.036 5701.60 IOPS, 22.27 MiB/s [2024-11-20T14:40:02.996Z]
00:30:14.036-00:30:14.299 [log condensed: the same reset cycle repeats 12 more times against tqpair=0x1baf000 (10.0.0.2:4420), from 15:40:02.989300 through 15:40:03.129399; entries differ only in timestamps]
00:30:14.299 [2024-11-20 15:40:03.141110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.299 [2024-11-20 15:40:03.141610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.299 [2024-11-20 15:40:03.141626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.299 [2024-11-20 15:40:03.141632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.299 [2024-11-20 15:40:03.141781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.299 [2024-11-20 15:40:03.141930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.300 [2024-11-20 15:40:03.141937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.300 [2024-11-20 15:40:03.141943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.300 [2024-11-20 15:40:03.141948] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:14.300 [2024-11-20 15:40:03.153793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.300 [2024-11-20 15:40:03.154189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.300 [2024-11-20 15:40:03.154203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.300 [2024-11-20 15:40:03.154209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.300 [2024-11-20 15:40:03.154359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.300 [2024-11-20 15:40:03.154513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.300 [2024-11-20 15:40:03.154519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.300 [2024-11-20 15:40:03.154524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.300 [2024-11-20 15:40:03.154529] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:14.300 [2024-11-20 15:40:03.166383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.300 [2024-11-20 15:40:03.166966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.300 [2024-11-20 15:40:03.166998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.300 [2024-11-20 15:40:03.167007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.300 [2024-11-20 15:40:03.167177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.300 [2024-11-20 15:40:03.167330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.300 [2024-11-20 15:40:03.167338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.300 [2024-11-20 15:40:03.167345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.300 [2024-11-20 15:40:03.167352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:14.300 [2024-11-20 15:40:03.179063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.300 [2024-11-20 15:40:03.179635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.300 [2024-11-20 15:40:03.179667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.300 [2024-11-20 15:40:03.179677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.300 [2024-11-20 15:40:03.179843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.300 [2024-11-20 15:40:03.179995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.300 [2024-11-20 15:40:03.180002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.300 [2024-11-20 15:40:03.180008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.300 [2024-11-20 15:40:03.180014] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:14.300 [2024-11-20 15:40:03.191728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.300 [2024-11-20 15:40:03.192120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.300 [2024-11-20 15:40:03.192136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.300 [2024-11-20 15:40:03.192142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.300 [2024-11-20 15:40:03.192296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.300 [2024-11-20 15:40:03.192447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.300 [2024-11-20 15:40:03.192453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.300 [2024-11-20 15:40:03.192463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.300 [2024-11-20 15:40:03.192469] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:14.300 [2024-11-20 15:40:03.204323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.300 [2024-11-20 15:40:03.204812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.300 [2024-11-20 15:40:03.204826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.300 [2024-11-20 15:40:03.204832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.300 [2024-11-20 15:40:03.204983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.300 [2024-11-20 15:40:03.205132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.300 [2024-11-20 15:40:03.205139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.300 [2024-11-20 15:40:03.205144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.300 [2024-11-20 15:40:03.205149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:14.300 [2024-11-20 15:40:03.216992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.300 [2024-11-20 15:40:03.217400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.300 [2024-11-20 15:40:03.217414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.300 [2024-11-20 15:40:03.217420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.300 [2024-11-20 15:40:03.217569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.300 [2024-11-20 15:40:03.217718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.300 [2024-11-20 15:40:03.217724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.300 [2024-11-20 15:40:03.217730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.300 [2024-11-20 15:40:03.217735] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:14.300 [2024-11-20 15:40:03.229572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.300 [2024-11-20 15:40:03.230049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.300 [2024-11-20 15:40:03.230062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.300 [2024-11-20 15:40:03.230068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.300 [2024-11-20 15:40:03.230221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.300 [2024-11-20 15:40:03.230370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.300 [2024-11-20 15:40:03.230376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.300 [2024-11-20 15:40:03.230382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.300 [2024-11-20 15:40:03.230387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:14.300 [2024-11-20 15:40:03.242239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.300 [2024-11-20 15:40:03.242813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.300 [2024-11-20 15:40:03.242845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.300 [2024-11-20 15:40:03.242854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.300 [2024-11-20 15:40:03.243018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.300 [2024-11-20 15:40:03.243176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.300 [2024-11-20 15:40:03.243184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.300 [2024-11-20 15:40:03.243189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.300 [2024-11-20 15:40:03.243196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:14.300 [2024-11-20 15:40:03.254907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.300 [2024-11-20 15:40:03.255246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.300 [2024-11-20 15:40:03.255263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.300 [2024-11-20 15:40:03.255269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.300 [2024-11-20 15:40:03.255419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.300 [2024-11-20 15:40:03.255568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.300 [2024-11-20 15:40:03.255575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.300 [2024-11-20 15:40:03.255580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.300 [2024-11-20 15:40:03.255585] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:14.564 [2024-11-20 15:40:03.267583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.564 [2024-11-20 15:40:03.267935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.564 [2024-11-20 15:40:03.267950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.564 [2024-11-20 15:40:03.267956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.564 [2024-11-20 15:40:03.268105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.564 [2024-11-20 15:40:03.268259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.564 [2024-11-20 15:40:03.268266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.564 [2024-11-20 15:40:03.268271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.564 [2024-11-20 15:40:03.268277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:14.564 [2024-11-20 15:40:03.280261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.564 [2024-11-20 15:40:03.280832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.564 [2024-11-20 15:40:03.280864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.564 [2024-11-20 15:40:03.280876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.564 [2024-11-20 15:40:03.281041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.564 [2024-11-20 15:40:03.281199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.564 [2024-11-20 15:40:03.281207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.564 [2024-11-20 15:40:03.281214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.564 [2024-11-20 15:40:03.281220] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:14.564 [2024-11-20 15:40:03.292927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.564 [2024-11-20 15:40:03.293408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.564 [2024-11-20 15:40:03.293439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.564 [2024-11-20 15:40:03.293448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.564 [2024-11-20 15:40:03.293613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.564 [2024-11-20 15:40:03.293765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.564 [2024-11-20 15:40:03.293772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.564 [2024-11-20 15:40:03.293778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.564 [2024-11-20 15:40:03.293785] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:14.564 [2024-11-20 15:40:03.305503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.564 [2024-11-20 15:40:03.306049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.564 [2024-11-20 15:40:03.306081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.564 [2024-11-20 15:40:03.306090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.564 [2024-11-20 15:40:03.306263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.564 [2024-11-20 15:40:03.306416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.564 [2024-11-20 15:40:03.306423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.564 [2024-11-20 15:40:03.306428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.564 [2024-11-20 15:40:03.306434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:14.564 [2024-11-20 15:40:03.318145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.564 [2024-11-20 15:40:03.318680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.564 [2024-11-20 15:40:03.318711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.564 [2024-11-20 15:40:03.318720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.564 [2024-11-20 15:40:03.318884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.564 [2024-11-20 15:40:03.319039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.564 [2024-11-20 15:40:03.319046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.564 [2024-11-20 15:40:03.319052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.564 [2024-11-20 15:40:03.319058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:14.564 [2024-11-20 15:40:03.330774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.564 [2024-11-20 15:40:03.331142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.564 [2024-11-20 15:40:03.331163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.564 [2024-11-20 15:40:03.331170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.564 [2024-11-20 15:40:03.331319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.564 [2024-11-20 15:40:03.331467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.564 [2024-11-20 15:40:03.331473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.564 [2024-11-20 15:40:03.331479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.564 [2024-11-20 15:40:03.331483] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:14.564 [2024-11-20 15:40:03.343478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.564 [2024-11-20 15:40:03.343950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.564 [2024-11-20 15:40:03.343962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.564 [2024-11-20 15:40:03.343968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.564 [2024-11-20 15:40:03.344116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.564 [2024-11-20 15:40:03.344269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.564 [2024-11-20 15:40:03.344276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.564 [2024-11-20 15:40:03.344281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.564 [2024-11-20 15:40:03.344286] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:14.564 [2024-11-20 15:40:03.356125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.564 [2024-11-20 15:40:03.356666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.564 [2024-11-20 15:40:03.356696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.564 [2024-11-20 15:40:03.356705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.564 [2024-11-20 15:40:03.356869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.564 [2024-11-20 15:40:03.357021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.564 [2024-11-20 15:40:03.357028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.564 [2024-11-20 15:40:03.357038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.564 [2024-11-20 15:40:03.357044] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:14.564 [2024-11-20 15:40:03.368774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.564 [2024-11-20 15:40:03.369119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.564 [2024-11-20 15:40:03.369134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.564 [2024-11-20 15:40:03.369140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.564 [2024-11-20 15:40:03.369293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.564 [2024-11-20 15:40:03.369443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.565 [2024-11-20 15:40:03.369448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.565 [2024-11-20 15:40:03.369453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.565 [2024-11-20 15:40:03.369459] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:14.565 [2024-11-20 15:40:03.381449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.565 [2024-11-20 15:40:03.381891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.565 [2024-11-20 15:40:03.381905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.565 [2024-11-20 15:40:03.381910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.565 [2024-11-20 15:40:03.382059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.565 [2024-11-20 15:40:03.382211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.565 [2024-11-20 15:40:03.382217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.565 [2024-11-20 15:40:03.382222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.565 [2024-11-20 15:40:03.382227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:14.565 [2024-11-20 15:40:03.394075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.565 [2024-11-20 15:40:03.394540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.565 [2024-11-20 15:40:03.394553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.565 [2024-11-20 15:40:03.394558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.565 [2024-11-20 15:40:03.394707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.565 [2024-11-20 15:40:03.394855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.565 [2024-11-20 15:40:03.394862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.565 [2024-11-20 15:40:03.394867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.565 [2024-11-20 15:40:03.394872] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:14.565 [2024-11-20 15:40:03.406732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.565 [2024-11-20 15:40:03.407214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.565 [2024-11-20 15:40:03.407228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.565 [2024-11-20 15:40:03.407233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.565 [2024-11-20 15:40:03.407381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.565 [2024-11-20 15:40:03.407529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.565 [2024-11-20 15:40:03.407535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.565 [2024-11-20 15:40:03.407540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.565 [2024-11-20 15:40:03.407545] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:14.565 [2024-11-20 15:40:03.419392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.565 [2024-11-20 15:40:03.419869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.565 [2024-11-20 15:40:03.419881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.565 [2024-11-20 15:40:03.419886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.565 [2024-11-20 15:40:03.420034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.565 [2024-11-20 15:40:03.420187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.565 [2024-11-20 15:40:03.420193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.565 [2024-11-20 15:40:03.420199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.565 [2024-11-20 15:40:03.420204] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:14.565 [2024-11-20 15:40:03.432046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.565 [2024-11-20 15:40:03.432482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.565 [2024-11-20 15:40:03.432494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.565 [2024-11-20 15:40:03.432499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.565 [2024-11-20 15:40:03.432647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.565 [2024-11-20 15:40:03.432796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.565 [2024-11-20 15:40:03.432801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.565 [2024-11-20 15:40:03.432806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.565 [2024-11-20 15:40:03.432811] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:14.565 [2024-11-20 15:40:03.444653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.565 [2024-11-20 15:40:03.445132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.565 [2024-11-20 15:40:03.445144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.565 [2024-11-20 15:40:03.445156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.565 [2024-11-20 15:40:03.445309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.565 [2024-11-20 15:40:03.445457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.565 [2024-11-20 15:40:03.445463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.565 [2024-11-20 15:40:03.445468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.565 [2024-11-20 15:40:03.445473] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:14.565 [2024-11-20 15:40:03.457318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.565 [2024-11-20 15:40:03.457807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.565 [2024-11-20 15:40:03.457819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.565 [2024-11-20 15:40:03.457825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.565 [2024-11-20 15:40:03.457972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.565 [2024-11-20 15:40:03.458121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.565 [2024-11-20 15:40:03.458127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.565 [2024-11-20 15:40:03.458132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.565 [2024-11-20 15:40:03.458137] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:14.565 [2024-11-20 15:40:03.469990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.565 [2024-11-20 15:40:03.470438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.565 [2024-11-20 15:40:03.470450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.565 [2024-11-20 15:40:03.470456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.565 [2024-11-20 15:40:03.470604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.565 [2024-11-20 15:40:03.470752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.565 [2024-11-20 15:40:03.470758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.565 [2024-11-20 15:40:03.470763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.565 [2024-11-20 15:40:03.470768] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:14.565 [2024-11-20 15:40:03.482616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.565 [2024-11-20 15:40:03.483062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.565 [2024-11-20 15:40:03.483074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.565 [2024-11-20 15:40:03.483079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.565 [2024-11-20 15:40:03.483232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.565 [2024-11-20 15:40:03.483385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.565 [2024-11-20 15:40:03.483391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.565 [2024-11-20 15:40:03.483396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.565 [2024-11-20 15:40:03.483400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:14.565 [2024-11-20 15:40:03.495246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.565 [2024-11-20 15:40:03.495599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.565 [2024-11-20 15:40:03.495611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.565 [2024-11-20 15:40:03.495616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.565 [2024-11-20 15:40:03.495764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.566 [2024-11-20 15:40:03.495912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.566 [2024-11-20 15:40:03.495917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.566 [2024-11-20 15:40:03.495922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.566 [2024-11-20 15:40:03.495927] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:14.566 [2024-11-20 15:40:03.507916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 785129 Killed "${NVMF_APP[@]}" "$@" 00:30:14.566 [2024-11-20 15:40:03.508384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.566 [2024-11-20 15:40:03.508396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.566 [2024-11-20 15:40:03.508402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.566 [2024-11-20 15:40:03.508550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.566 [2024-11-20 15:40:03.508698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.566 [2024-11-20 15:40:03.508704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.566 [2024-11-20 15:40:03.508709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.566 [2024-11-20 15:40:03.508713] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
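errno 111 in the blocks above is ECONNREFUSED: nothing accepts connections on 10.0.0.2:4420 once the shell reports the nvmf target process (785129) killed, so each spdk_nvme_ctrlr_reconnect_poll_async() pass fails and bdev_nvme schedules another reset, roughly every 12-13 ms by the timestamps, until the target is restarted below. As a minimal sketch only, not a command taken from this job, the retry behavior can be bounded when attaching the controller, assuming an SPDK build whose rpc.py exposes the bdev_nvme reconnect options (all flag values here are illustrative):

    # Sketch: attach with bounded reconnect behavior (values illustrative).
    # --ctrlr-loss-timeout-sec : give up on the controller after 30 s of failures
    # --reconnect-delay-sec    : wait 5 s between reconnect attempts
    # --fast-io-fail-timeout-sec : fail queued I/O after 10 s instead of holding it
    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 30 --reconnect-delay-sec 5 \
        --fast-io-fail-timeout-sec 10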
00:30:14.566 15:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:30:14.566 15:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:14.566 15:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:14.566 15:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:14.566 15:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:14.566 15:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=786793 00:30:14.566 15:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 786793 00:30:14.566 15:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:14.566 15:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 786793 ']' 00:30:14.566 15:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:14.566 15:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:14.566 15:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:14.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:14.566 15:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:14.566 15:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:14.566 [2024-11-20 15:40:03.520561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.566 [2024-11-20 15:40:03.520932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.566 [2024-11-20 15:40:03.520944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.566 [2024-11-20 15:40:03.520949] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.834 [2024-11-20 15:40:03.521098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.834 [2024-11-20 15:40:03.521251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.834 [2024-11-20 15:40:03.521258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.834 [2024-11-20 15:40:03.521263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.834 [2024-11-20 15:40:03.521268] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
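The restart above relaunches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace with -i 0 (shared-memory instance id), -e 0xFFFF (tracepoint group mask), and -m 0xE (core mask 0b1110, reactors on cores 1-3); waitforlisten then polls until the app answers on /var/tmp/spdk.sock. A minimal way to probe the same readiness by hand, assuming the default RPC socket path:

    # Sketch: succeeds and prints the SPDK version once the target's RPC server is up.
    ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version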
00:30:14.834 [2024-11-20 15:40:03.533255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.834 [2024-11-20 15:40:03.533735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.834 [2024-11-20 15:40:03.533747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.834 [2024-11-20 15:40:03.533753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.834 [2024-11-20 15:40:03.533901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.834 [2024-11-20 15:40:03.534049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.834 [2024-11-20 15:40:03.534055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.834 [2024-11-20 15:40:03.534060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.834 [2024-11-20 15:40:03.534065] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:14.834 [2024-11-20 15:40:03.545916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.834 [2024-11-20 15:40:03.546402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.834 [2024-11-20 15:40:03.546415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.834 [2024-11-20 15:40:03.546421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.834 [2024-11-20 15:40:03.546569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.834 [2024-11-20 15:40:03.546717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.834 [2024-11-20 15:40:03.546726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.834 [2024-11-20 15:40:03.546731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.834 [2024-11-20 15:40:03.546736] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:14.834 [2024-11-20 15:40:03.558594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.834 [2024-11-20 15:40:03.558924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.834 [2024-11-20 15:40:03.558937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.834 [2024-11-20 15:40:03.558942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.835 [2024-11-20 15:40:03.559091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.835 [2024-11-20 15:40:03.559243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.835 [2024-11-20 15:40:03.559250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.835 [2024-11-20 15:40:03.559255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.835 [2024-11-20 15:40:03.559260] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:14.835 [2024-11-20 15:40:03.570718] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:30:14.835 [2024-11-20 15:40:03.570763] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:14.835 [2024-11-20 15:40:03.571257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.835 [2024-11-20 15:40:03.571718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.835 [2024-11-20 15:40:03.571731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.835 [2024-11-20 15:40:03.571736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.835 [2024-11-20 15:40:03.571884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.835 [2024-11-20 15:40:03.572033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.835 [2024-11-20 15:40:03.572039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.835 [2024-11-20 15:40:03.572045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.835 [2024-11-20 15:40:03.572050] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:14.835 [2024-11-20 15:40:03.583900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.835 [2024-11-20 15:40:03.584576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.835 [2024-11-20 15:40:03.584606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.835 [2024-11-20 15:40:03.584615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.835 [2024-11-20 15:40:03.584781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.835 [2024-11-20 15:40:03.584933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.835 [2024-11-20 15:40:03.584943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.835 [2024-11-20 15:40:03.584949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.835 [2024-11-20 15:40:03.584955] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:14.835 [2024-11-20 15:40:03.596542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.835 [2024-11-20 15:40:03.597112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.835 [2024-11-20 15:40:03.597142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.835 [2024-11-20 15:40:03.597152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.835 [2024-11-20 15:40:03.597323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.835 [2024-11-20 15:40:03.597475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.835 [2024-11-20 15:40:03.597482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.835 [2024-11-20 15:40:03.597487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.835 [2024-11-20 15:40:03.597493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:14.835 [2024-11-20 15:40:03.609150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.835 [2024-11-20 15:40:03.609731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.835 [2024-11-20 15:40:03.609761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.835 [2024-11-20 15:40:03.609770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.835 [2024-11-20 15:40:03.609935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.835 [2024-11-20 15:40:03.610086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.835 [2024-11-20 15:40:03.610093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.835 [2024-11-20 15:40:03.610098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.835 [2024-11-20 15:40:03.610104] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:14.835 [2024-11-20 15:40:03.621825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.835 [2024-11-20 15:40:03.622450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.835 [2024-11-20 15:40:03.622480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.835 [2024-11-20 15:40:03.622489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.835 [2024-11-20 15:40:03.622653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.835 [2024-11-20 15:40:03.622806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.835 [2024-11-20 15:40:03.622812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.835 [2024-11-20 15:40:03.622818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.835 [2024-11-20 15:40:03.622827] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:14.835 [2024-11-20 15:40:03.634408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.835 [2024-11-20 15:40:03.634858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.835 [2024-11-20 15:40:03.634873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.835 [2024-11-20 15:40:03.634879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.835 [2024-11-20 15:40:03.635028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.835 [2024-11-20 15:40:03.635181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.835 [2024-11-20 15:40:03.635188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.835 [2024-11-20 15:40:03.635193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.835 [2024-11-20 15:40:03.635198] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:14.835 [2024-11-20 15:40:03.647046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.835 [2024-11-20 15:40:03.647583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.835 [2024-11-20 15:40:03.647596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.835 [2024-11-20 15:40:03.647602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.835 [2024-11-20 15:40:03.647750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.835 [2024-11-20 15:40:03.647899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.835 [2024-11-20 15:40:03.647905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.835 [2024-11-20 15:40:03.647910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.835 [2024-11-20 15:40:03.647915] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:14.835 [2024-11-20 15:40:03.659638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.835 [2024-11-20 15:40:03.660130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.835 [2024-11-20 15:40:03.660142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.835 [2024-11-20 15:40:03.660148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.836 [2024-11-20 15:40:03.660300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.836 [2024-11-20 15:40:03.660449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.836 [2024-11-20 15:40:03.660455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.836 [2024-11-20 15:40:03.660460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.836 [2024-11-20 15:40:03.660465] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:14.836 [2024-11-20 15:40:03.660971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:14.836 [2024-11-20 15:40:03.672321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.836 [2024-11-20 15:40:03.672808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.836 [2024-11-20 15:40:03.672821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.836 [2024-11-20 15:40:03.672827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.836 [2024-11-20 15:40:03.672976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.836 [2024-11-20 15:40:03.673124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.836 [2024-11-20 15:40:03.673130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.836 [2024-11-20 15:40:03.673135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.836 [2024-11-20 15:40:03.673140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:14.836 [2024-11-20 15:40:03.684945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.836 [2024-11-20 15:40:03.685501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.836 [2024-11-20 15:40:03.685533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.836 [2024-11-20 15:40:03.685542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.836 [2024-11-20 15:40:03.685708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.836 [2024-11-20 15:40:03.685861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.836 [2024-11-20 15:40:03.685867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.836 [2024-11-20 15:40:03.685873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.836 [2024-11-20 15:40:03.685879] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:14.836 [2024-11-20 15:40:03.690279] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:14.836 [2024-11-20 15:40:03.690301] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:14.836 [2024-11-20 15:40:03.690308] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:14.836 [2024-11-20 15:40:03.690314] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:14.836 [2024-11-20 15:40:03.690319] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:14.836 [2024-11-20 15:40:03.691520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:14.836 [2024-11-20 15:40:03.691684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:14.836 [2024-11-20 15:40:03.691718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:14.836 [2024-11-20 15:40:03.697609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.836 [2024-11-20 15:40:03.698210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.836 [2024-11-20 15:40:03.698242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.836 [2024-11-20 15:40:03.698251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.836 [2024-11-20 15:40:03.698421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.836 [2024-11-20 15:40:03.698577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.836 [2024-11-20 15:40:03.698585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.836 [2024-11-20 15:40:03.698590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.836 [2024-11-20 15:40:03.698596] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:14.836 [2024-11-20 15:40:03.710193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.836 [2024-11-20 15:40:03.710787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.836 [2024-11-20 15:40:03.710818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.836 [2024-11-20 15:40:03.710828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.836 [2024-11-20 15:40:03.710993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.836 [2024-11-20 15:40:03.711145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.836 [2024-11-20 15:40:03.711151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.836 [2024-11-20 15:40:03.711157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.836 [2024-11-20 15:40:03.711171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:14.836 [2024-11-20 15:40:03.722889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.836 [2024-11-20 15:40:03.723361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.836 [2024-11-20 15:40:03.723391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.836 [2024-11-20 15:40:03.723400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.836 [2024-11-20 15:40:03.723566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.836 [2024-11-20 15:40:03.723717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.836 [2024-11-20 15:40:03.723723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.836 [2024-11-20 15:40:03.723729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.836 [2024-11-20 15:40:03.723735] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:14.836 [2024-11-20 15:40:03.735600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.836 [2024-11-20 15:40:03.735975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.836 [2024-11-20 15:40:03.735992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.836 [2024-11-20 15:40:03.735998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.836 [2024-11-20 15:40:03.736148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.836 [2024-11-20 15:40:03.736303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.836 [2024-11-20 15:40:03.736309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.836 [2024-11-20 15:40:03.736319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.836 [2024-11-20 15:40:03.736324] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:14.836 [2024-11-20 15:40:03.748179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.836 [2024-11-20 15:40:03.748592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.836 [2024-11-20 15:40:03.748605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.836 [2024-11-20 15:40:03.748611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.836 [2024-11-20 15:40:03.748760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.836 [2024-11-20 15:40:03.748909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.836 [2024-11-20 15:40:03.748914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.836 [2024-11-20 15:40:03.748919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.836 [2024-11-20 15:40:03.748924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:14.836 [2024-11-20 15:40:03.760788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.836 [2024-11-20 15:40:03.761250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.837 [2024-11-20 15:40:03.761281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.837 [2024-11-20 15:40:03.761289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.837 [2024-11-20 15:40:03.761454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.837 [2024-11-20 15:40:03.761606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.837 [2024-11-20 15:40:03.761613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.837 [2024-11-20 15:40:03.761619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.837 [2024-11-20 15:40:03.761625] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:14.837 [2024-11-20 15:40:03.773481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.837 [2024-11-20 15:40:03.774048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.837 [2024-11-20 15:40:03.774078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:14.837 [2024-11-20 15:40:03.774087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:14.837 [2024-11-20 15:40:03.774258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:14.837 [2024-11-20 15:40:03.774411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.837 [2024-11-20 15:40:03.774417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.837 [2024-11-20 15:40:03.774422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.837 [2024-11-20 15:40:03.774429] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:14.837 [2024-11-20 15:40:03.786138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.149 [2024-11-20 15:40:03.786675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.149 [2024-11-20 15:40:03.786706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.149 [2024-11-20 15:40:03.786715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.149 [2024-11-20 15:40:03.786880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.149 [2024-11-20 15:40:03.787032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.149 [2024-11-20 15:40:03.787038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.149 [2024-11-20 15:40:03.787044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.149 [2024-11-20 15:40:03.787049] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:15.149 [2024-11-20 15:40:03.798768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.149 [2024-11-20 15:40:03.799114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.149 [2024-11-20 15:40:03.799129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.149 [2024-11-20 15:40:03.799135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.149 [2024-11-20 15:40:03.799289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.149 [2024-11-20 15:40:03.799438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.149 [2024-11-20 15:40:03.799443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.149 [2024-11-20 15:40:03.799449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.149 [2024-11-20 15:40:03.799454] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:15.149 [2024-11-20 15:40:03.811458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.149 [2024-11-20 15:40:03.811898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.149 [2024-11-20 15:40:03.811913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.149 [2024-11-20 15:40:03.811918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.149 [2024-11-20 15:40:03.812066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.149 [2024-11-20 15:40:03.812219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.149 [2024-11-20 15:40:03.812225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.149 [2024-11-20 15:40:03.812230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.149 [2024-11-20 15:40:03.812235] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:15.149 [2024-11-20 15:40:03.824077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.149 [2024-11-20 15:40:03.824607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.149 [2024-11-20 15:40:03.824638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.149 [2024-11-20 15:40:03.824651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.149 [2024-11-20 15:40:03.824815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.149 [2024-11-20 15:40:03.824967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.149 [2024-11-20 15:40:03.824973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.149 [2024-11-20 15:40:03.824979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.149 [2024-11-20 15:40:03.824985] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:15.149 [2024-11-20 15:40:03.836700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.149 [2024-11-20 15:40:03.837411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.149 [2024-11-20 15:40:03.837441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.149 [2024-11-20 15:40:03.837450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.149 [2024-11-20 15:40:03.837615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.149 [2024-11-20 15:40:03.837766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.149 [2024-11-20 15:40:03.837772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.149 [2024-11-20 15:40:03.837778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.149 [2024-11-20 15:40:03.837784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:15.149 [2024-11-20 15:40:03.849355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.149 [2024-11-20 15:40:03.849853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.149 [2024-11-20 15:40:03.849869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.149 [2024-11-20 15:40:03.849874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.149 [2024-11-20 15:40:03.850023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.149 [2024-11-20 15:40:03.850176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.149 [2024-11-20 15:40:03.850182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.149 [2024-11-20 15:40:03.850187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.149 [2024-11-20 15:40:03.850192] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:15.149 [2024-11-20 15:40:03.862038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.149 [2024-11-20 15:40:03.862608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.149 [2024-11-20 15:40:03.862639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.149 [2024-11-20 15:40:03.862647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.149 [2024-11-20 15:40:03.862812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.149 [2024-11-20 15:40:03.862967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.149 [2024-11-20 15:40:03.862974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.149 [2024-11-20 15:40:03.862979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.149 [2024-11-20 15:40:03.862985] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:15.149 [2024-11-20 15:40:03.874695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.149 [2024-11-20 15:40:03.875262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.149 [2024-11-20 15:40:03.875293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.149 [2024-11-20 15:40:03.875302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.149 [2024-11-20 15:40:03.875466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.149 [2024-11-20 15:40:03.875618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.149 [2024-11-20 15:40:03.875624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.149 [2024-11-20 15:40:03.875629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.149 [2024-11-20 15:40:03.875635] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:15.149 [2024-11-20 15:40:03.887346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.149 [2024-11-20 15:40:03.887934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.149 [2024-11-20 15:40:03.887964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.149 [2024-11-20 15:40:03.887973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.149 [2024-11-20 15:40:03.888137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.149 [2024-11-20 15:40:03.888295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.149 [2024-11-20 15:40:03.888302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.149 [2024-11-20 15:40:03.888308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.149 [2024-11-20 15:40:03.888314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:15.149 [2024-11-20 15:40:03.900017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.149 [2024-11-20 15:40:03.900578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.149 [2024-11-20 15:40:03.900609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.150 [2024-11-20 15:40:03.900618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.150 [2024-11-20 15:40:03.900782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.150 [2024-11-20 15:40:03.900934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.150 [2024-11-20 15:40:03.900940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.150 [2024-11-20 15:40:03.900945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.150 [2024-11-20 15:40:03.900954] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:15.150 [2024-11-20 15:40:03.912673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.150 [2024-11-20 15:40:03.913252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.150 [2024-11-20 15:40:03.913293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.150 [2024-11-20 15:40:03.913301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.150 [2024-11-20 15:40:03.913468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.150 [2024-11-20 15:40:03.913619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.150 [2024-11-20 15:40:03.913625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.150 [2024-11-20 15:40:03.913630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.150 [2024-11-20 15:40:03.913636] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:15.150 [2024-11-20 15:40:03.925347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.150 [2024-11-20 15:40:03.925714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.150 [2024-11-20 15:40:03.925729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.150 [2024-11-20 15:40:03.925735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.150 [2024-11-20 15:40:03.925883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.150 [2024-11-20 15:40:03.926031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.150 [2024-11-20 15:40:03.926037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.150 [2024-11-20 15:40:03.926042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.150 [2024-11-20 15:40:03.926047] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:15.150 [2024-11-20 15:40:03.938029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.150 [2024-11-20 15:40:03.938601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.150 [2024-11-20 15:40:03.938632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.150 [2024-11-20 15:40:03.938641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.150 [2024-11-20 15:40:03.938805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.150 [2024-11-20 15:40:03.938957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.150 [2024-11-20 15:40:03.938963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.150 [2024-11-20 15:40:03.938968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.150 [2024-11-20 15:40:03.938974] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:15.150 [2024-11-20 15:40:03.950683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.150 [2024-11-20 15:40:03.951154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.150 [2024-11-20 15:40:03.951174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.150 [2024-11-20 15:40:03.951180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.150 [2024-11-20 15:40:03.951329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.150 [2024-11-20 15:40:03.951478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.150 [2024-11-20 15:40:03.951483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.150 [2024-11-20 15:40:03.951489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.150 [2024-11-20 15:40:03.951494] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:15.150 [2024-11-20 15:40:03.963342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.150 [2024-11-20 15:40:03.963811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.150 [2024-11-20 15:40:03.963824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.150 [2024-11-20 15:40:03.963829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.150 [2024-11-20 15:40:03.963977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.150 [2024-11-20 15:40:03.964126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.150 [2024-11-20 15:40:03.964132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.150 [2024-11-20 15:40:03.964136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.150 [2024-11-20 15:40:03.964141] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:15.150 [2024-11-20 15:40:03.975990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.150 [2024-11-20 15:40:03.976309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.150 [2024-11-20 15:40:03.976322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.150 [2024-11-20 15:40:03.976327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.150 [2024-11-20 15:40:03.976475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.150 [2024-11-20 15:40:03.976623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.150 [2024-11-20 15:40:03.976628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.150 [2024-11-20 15:40:03.976633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.150 [2024-11-20 15:40:03.976638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:15.150 [2024-11-20 15:40:03.988624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.150 [2024-11-20 15:40:03.989082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.150 [2024-11-20 15:40:03.989094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.150 [2024-11-20 15:40:03.989103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.150 [2024-11-20 15:40:03.989255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.150 [2024-11-20 15:40:03.989404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.150 [2024-11-20 15:40:03.989409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.150 [2024-11-20 15:40:03.989414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.150 [2024-11-20 15:40:03.989419] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:15.150 4751.33 IOPS, 18.56 MiB/s [2024-11-20T14:40:04.110Z] [2024-11-20 15:40:04.001326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.150 [2024-11-20 15:40:04.001867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.150 [2024-11-20 15:40:04.001898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.150 [2024-11-20 15:40:04.001906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.150 [2024-11-20 15:40:04.002071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.150 [2024-11-20 15:40:04.002228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.150 [2024-11-20 15:40:04.002234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.150 [2024-11-20 15:40:04.002240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.150 [2024-11-20 15:40:04.002247] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:15.150 [2024-11-20 15:40:04.013955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.150 [2024-11-20 15:40:04.014421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.150 [2024-11-20 15:40:04.014437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.150 [2024-11-20 15:40:04.014442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.150 [2024-11-20 15:40:04.014591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.150 [2024-11-20 15:40:04.014740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.150 [2024-11-20 15:40:04.014746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.150 [2024-11-20 15:40:04.014751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.150 [2024-11-20 15:40:04.014756] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:15.150 [2024-11-20 15:40:04.026594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.150 [2024-11-20 15:40:04.027040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.151 [2024-11-20 15:40:04.027053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.151 [2024-11-20 15:40:04.027058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.151 [2024-11-20 15:40:04.027210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.151 [2024-11-20 15:40:04.027364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.151 [2024-11-20 15:40:04.027369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.151 [2024-11-20 15:40:04.027374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.151 [2024-11-20 15:40:04.027379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:15.151 [2024-11-20 15:40:04.039212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.151 [2024-11-20 15:40:04.039683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.151 [2024-11-20 15:40:04.039695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.151 [2024-11-20 15:40:04.039700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.151 [2024-11-20 15:40:04.039849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.151 [2024-11-20 15:40:04.039997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.151 [2024-11-20 15:40:04.040002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.151 [2024-11-20 15:40:04.040007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.151 [2024-11-20 15:40:04.040012] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:15.151 [2024-11-20 15:40:04.051850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.151 [2024-11-20 15:40:04.052425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.151 [2024-11-20 15:40:04.052455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.151 [2024-11-20 15:40:04.052464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.151 [2024-11-20 15:40:04.052629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.151 [2024-11-20 15:40:04.052781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.151 [2024-11-20 15:40:04.052787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.151 [2024-11-20 15:40:04.052792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.151 [2024-11-20 15:40:04.052798] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:15.151 [2024-11-20 15:40:04.064516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.151 [2024-11-20 15:40:04.064964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.151 [2024-11-20 15:40:04.064979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.151 [2024-11-20 15:40:04.064985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.151 [2024-11-20 15:40:04.065134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.151 [2024-11-20 15:40:04.065290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.151 [2024-11-20 15:40:04.065296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.151 [2024-11-20 15:40:04.065305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.151 [2024-11-20 15:40:04.065310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:15.151 [2024-11-20 15:40:04.077155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.151 [2024-11-20 15:40:04.077624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.151 [2024-11-20 15:40:04.077636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.151 [2024-11-20 15:40:04.077642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.151 [2024-11-20 15:40:04.077790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.151 [2024-11-20 15:40:04.077938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.151 [2024-11-20 15:40:04.077944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.151 [2024-11-20 15:40:04.077950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.151 [2024-11-20 15:40:04.077955] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:15.151 [2024-11-20 15:40:04.089806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.151 [2024-11-20 15:40:04.090265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.151 [2024-11-20 15:40:04.090296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.151 [2024-11-20 15:40:04.090305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.151 [2024-11-20 15:40:04.090472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.151 [2024-11-20 15:40:04.090624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.151 [2024-11-20 15:40:04.090630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.151 [2024-11-20 15:40:04.090635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.151 [2024-11-20 15:40:04.090641] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:15.151 [2024-11-20 15:40:04.102500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.151 [2024-11-20 15:40:04.103063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.151 [2024-11-20 15:40:04.103093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.151 [2024-11-20 15:40:04.103102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.151 [2024-11-20 15:40:04.103273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.151 [2024-11-20 15:40:04.103425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.151 [2024-11-20 15:40:04.103431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.151 [2024-11-20 15:40:04.103437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.151 [2024-11-20 15:40:04.103442] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:15.454 [2024-11-20 15:40:04.115173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.454 [2024-11-20 15:40:04.115742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.454 [2024-11-20 15:40:04.115772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.454 [2024-11-20 15:40:04.115781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.454 [2024-11-20 15:40:04.115946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.454 [2024-11-20 15:40:04.116097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.454 [2024-11-20 15:40:04.116104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.454 [2024-11-20 15:40:04.116109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.454 [2024-11-20 15:40:04.116115] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:15.454 [2024-11-20 15:40:04.127831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.454 [2024-11-20 15:40:04.128459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.454 [2024-11-20 15:40:04.128490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.454 [2024-11-20 15:40:04.128498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.454 [2024-11-20 15:40:04.128662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.454 [2024-11-20 15:40:04.128814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.454 [2024-11-20 15:40:04.128820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.454 [2024-11-20 15:40:04.128826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.454 [2024-11-20 15:40:04.128832] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:15.454 [2024-11-20 15:40:04.140401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.454 [2024-11-20 15:40:04.140929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.454 [2024-11-20 15:40:04.140944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.454 [2024-11-20 15:40:04.140949] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.454 [2024-11-20 15:40:04.141098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.454 [2024-11-20 15:40:04.141251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.454 [2024-11-20 15:40:04.141257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.454 [2024-11-20 15:40:04.141262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.454 [2024-11-20 15:40:04.141268] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:15.454 [2024-11-20 15:40:04.152965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.454 [2024-11-20 15:40:04.153410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.454 [2024-11-20 15:40:04.153423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.454 [2024-11-20 15:40:04.153433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.454 [2024-11-20 15:40:04.153582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.454 [2024-11-20 15:40:04.153730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.454 [2024-11-20 15:40:04.153735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.454 [2024-11-20 15:40:04.153740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.454 [2024-11-20 15:40:04.153745] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:15.454 [2024-11-20 15:40:04.165588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.454 [2024-11-20 15:40:04.166034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.454 [2024-11-20 15:40:04.166046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.454 [2024-11-20 15:40:04.166051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.454 [2024-11-20 15:40:04.166204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.454 [2024-11-20 15:40:04.166353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.454 [2024-11-20 15:40:04.166358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.454 [2024-11-20 15:40:04.166363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.454 [2024-11-20 15:40:04.166368] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:15.454 [2024-11-20 15:40:04.178209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.454 [2024-11-20 15:40:04.178770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.454 [2024-11-20 15:40:04.178800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.454 [2024-11-20 15:40:04.178809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.454 [2024-11-20 15:40:04.178973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.454 [2024-11-20 15:40:04.179124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.454 [2024-11-20 15:40:04.179131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.454 [2024-11-20 15:40:04.179136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.454 [2024-11-20 15:40:04.179142] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:15.454 [2024-11-20 15:40:04.190850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.454 [2024-11-20 15:40:04.191480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.454 [2024-11-20 15:40:04.191511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.454 [2024-11-20 15:40:04.191520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.454 [2024-11-20 15:40:04.191684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.454 [2024-11-20 15:40:04.191839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.454 [2024-11-20 15:40:04.191846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.454 [2024-11-20 15:40:04.191851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.454 [2024-11-20 15:40:04.191857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:15.454 [2024-11-20 15:40:04.203433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.454 [2024-11-20 15:40:04.203834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.454 [2024-11-20 15:40:04.203865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.454 [2024-11-20 15:40:04.203874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.454 [2024-11-20 15:40:04.204038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.454 [2024-11-20 15:40:04.204204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.454 [2024-11-20 15:40:04.204212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.454 [2024-11-20 15:40:04.204217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.454 [2024-11-20 15:40:04.204223] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:15.454 [2024-11-20 15:40:04.216075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.454 [2024-11-20 15:40:04.216700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.454 [2024-11-20 15:40:04.216731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.454 [2024-11-20 15:40:04.216739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.454 [2024-11-20 15:40:04.216904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.454 [2024-11-20 15:40:04.217056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.455 [2024-11-20 15:40:04.217062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.455 [2024-11-20 15:40:04.217068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.455 [2024-11-20 15:40:04.217073] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:15.455 [2024-11-20 15:40:04.228785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.455 [2024-11-20 15:40:04.229271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.455 [2024-11-20 15:40:04.229301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.455 [2024-11-20 15:40:04.229310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.455 [2024-11-20 15:40:04.229476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.455 [2024-11-20 15:40:04.229627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.455 [2024-11-20 15:40:04.229634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.455 [2024-11-20 15:40:04.229643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.455 [2024-11-20 15:40:04.229649] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:15.455 [2024-11-20 15:40:04.241366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.455 [2024-11-20 15:40:04.241849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.455 [2024-11-20 15:40:04.241865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.455 [2024-11-20 15:40:04.241871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.455 [2024-11-20 15:40:04.242020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.455 [2024-11-20 15:40:04.242174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.455 [2024-11-20 15:40:04.242180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.455 [2024-11-20 15:40:04.242185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.455 [2024-11-20 15:40:04.242190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:15.455 [2024-11-20 15:40:04.254035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.455 [2024-11-20 15:40:04.254540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.455 [2024-11-20 15:40:04.254553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.455 [2024-11-20 15:40:04.254559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.455 [2024-11-20 15:40:04.254707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.455 [2024-11-20 15:40:04.254856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.455 [2024-11-20 15:40:04.254870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.455 [2024-11-20 15:40:04.254875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.455 [2024-11-20 15:40:04.254880] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:15.455 [2024-11-20 15:40:04.266733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.455 [2024-11-20 15:40:04.267182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.455 [2024-11-20 15:40:04.267197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.455 [2024-11-20 15:40:04.267203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.455 [2024-11-20 15:40:04.267352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.455 [2024-11-20 15:40:04.267501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.455 [2024-11-20 15:40:04.267506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.455 [2024-11-20 15:40:04.267512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.455 [2024-11-20 15:40:04.267518] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:15.455 [2024-11-20 15:40:04.279367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.455 [2024-11-20 15:40:04.279964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.455 [2024-11-20 15:40:04.279995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.455 [2024-11-20 15:40:04.280004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.455 [2024-11-20 15:40:04.280175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.455 [2024-11-20 15:40:04.280327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.455 [2024-11-20 15:40:04.280334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.455 [2024-11-20 15:40:04.280340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.455 [2024-11-20 15:40:04.280346] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:15.455 [2024-11-20 15:40:04.292055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.455 [2024-11-20 15:40:04.292529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.455 [2024-11-20 15:40:04.292560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.455 [2024-11-20 15:40:04.292569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.455 [2024-11-20 15:40:04.292733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.455 [2024-11-20 15:40:04.292885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.455 [2024-11-20 15:40:04.292892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.455 [2024-11-20 15:40:04.292897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.455 [2024-11-20 15:40:04.292903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:15.455 [2024-11-20 15:40:04.304761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.455 [2024-11-20 15:40:04.305258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.455 [2024-11-20 15:40:04.305288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.455 [2024-11-20 15:40:04.305297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.455 [2024-11-20 15:40:04.305464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.455 [2024-11-20 15:40:04.305616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.455 [2024-11-20 15:40:04.305623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.455 [2024-11-20 15:40:04.305629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.455 [2024-11-20 15:40:04.305635] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:15.455 [2024-11-20 15:40:04.317389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.455 [2024-11-20 15:40:04.317998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.455 [2024-11-20 15:40:04.318029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.455 [2024-11-20 15:40:04.318041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.455 [2024-11-20 15:40:04.318213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.455 [2024-11-20 15:40:04.318365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.455 [2024-11-20 15:40:04.318372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.455 [2024-11-20 15:40:04.318377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.455 [2024-11-20 15:40:04.318384] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:15.455 [2024-11-20 15:40:04.330093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.455 [2024-11-20 15:40:04.330574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.455 [2024-11-20 15:40:04.330590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.455 [2024-11-20 15:40:04.330596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.455 [2024-11-20 15:40:04.330745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.455 [2024-11-20 15:40:04.330894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.455 [2024-11-20 15:40:04.330900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.455 [2024-11-20 15:40:04.330905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.455 [2024-11-20 15:40:04.330911] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:15.455 [2024-11-20 15:40:04.342756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.455 [2024-11-20 15:40:04.343202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.455 [2024-11-20 15:40:04.343233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.455 [2024-11-20 15:40:04.343242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.455 [2024-11-20 15:40:04.343409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.456 [2024-11-20 15:40:04.343561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.456 [2024-11-20 15:40:04.343567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.456 [2024-11-20 15:40:04.343572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.456 [2024-11-20 15:40:04.343578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:15.456 [2024-11-20 15:40:04.355431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.456 [2024-11-20 15:40:04.355981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.456 [2024-11-20 15:40:04.356011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.456 [2024-11-20 15:40:04.356020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.456 [2024-11-20 15:40:04.356192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.456 [2024-11-20 15:40:04.356348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.456 [2024-11-20 15:40:04.356355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.456 [2024-11-20 15:40:04.356360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.456 [2024-11-20 15:40:04.356366] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:15.456 [2024-11-20 15:40:04.368077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.456 15:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:15.456 [2024-11-20 15:40:04.368629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.456 [2024-11-20 15:40:04.368659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.456 [2024-11-20 15:40:04.368668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.456 15:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:30:15.456 [2024-11-20 15:40:04.368833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.456 [2024-11-20 15:40:04.368984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.456 [2024-11-20 15:40:04.368991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.456 [2024-11-20 15:40:04.368999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.456 [2024-11-20 15:40:04.369006] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:15.456 15:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:15.456 15:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:15.456 15:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:15.456 [2024-11-20 15:40:04.380729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.456 [2024-11-20 15:40:04.381056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.456 [2024-11-20 15:40:04.381073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.456 [2024-11-20 15:40:04.381079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.456 [2024-11-20 15:40:04.381233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.456 [2024-11-20 15:40:04.381384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.456 [2024-11-20 15:40:04.381391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.456 [2024-11-20 15:40:04.381396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.456 [2024-11-20 15:40:04.381401] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:15.456 [2024-11-20 15:40:04.393397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.456 [2024-11-20 15:40:04.393856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.456 [2024-11-20 15:40:04.393869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.456 [2024-11-20 15:40:04.393874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.456 [2024-11-20 15:40:04.394026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.456 [2024-11-20 15:40:04.394179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.456 [2024-11-20 15:40:04.394186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.456 [2024-11-20 15:40:04.394190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.456 [2024-11-20 15:40:04.394195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:15.722 [2024-11-20 15:40:04.406051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.722 [2024-11-20 15:40:04.406594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.722 [2024-11-20 15:40:04.406625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.722 [2024-11-20 15:40:04.406634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.722 [2024-11-20 15:40:04.406798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.722 [2024-11-20 15:40:04.406950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.722 [2024-11-20 15:40:04.406957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.722 [2024-11-20 15:40:04.406962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.722 [2024-11-20 15:40:04.406968] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:15.722 15:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:15.722 15:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:15.722 15:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.722 15:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:15.722 [2024-11-20 15:40:04.415433] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:15.722 [2024-11-20 15:40:04.418680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.722 [2024-11-20 15:40:04.419137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.722 [2024-11-20 15:40:04.419152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.722 [2024-11-20 15:40:04.419162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.722 [2024-11-20 15:40:04.419312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.722 [2024-11-20 15:40:04.419461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.722 [2024-11-20 15:40:04.419467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.722 [2024-11-20 15:40:04.419472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.722 [2024-11-20 15:40:04.419476] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:15.722 15:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.722 15:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:15.722 15:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.722 15:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:15.722 [2024-11-20 15:40:04.431320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.722 [2024-11-20 15:40:04.431674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.722 [2024-11-20 15:40:04.431686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.722 [2024-11-20 15:40:04.431692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.722 [2024-11-20 15:40:04.431839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.722 [2024-11-20 15:40:04.431987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.722 [2024-11-20 15:40:04.431993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.722 [2024-11-20 15:40:04.431998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.722 [2024-11-20 15:40:04.432003] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:15.722 [2024-11-20 15:40:04.443979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.722 [2024-11-20 15:40:04.444523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.722 [2024-11-20 15:40:04.444553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.722 [2024-11-20 15:40:04.444562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.722 [2024-11-20 15:40:04.444727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.722 [2024-11-20 15:40:04.444879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.722 [2024-11-20 15:40:04.444885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.722 [2024-11-20 15:40:04.444890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.722 [2024-11-20 15:40:04.444896] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:15.722 Malloc0 00:30:15.722 15:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.722 15:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:15.722 15:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.722 15:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:15.722 [2024-11-20 15:40:04.456607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.722 [2024-11-20 15:40:04.457073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.722 [2024-11-20 15:40:04.457088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.722 [2024-11-20 15:40:04.457093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.722 [2024-11-20 15:40:04.457247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.722 [2024-11-20 15:40:04.457396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.722 [2024-11-20 15:40:04.457402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.722 [2024-11-20 15:40:04.457407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:15.722 [2024-11-20 15:40:04.457416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:15.722 15:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.722 15:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:15.722 15:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.722 15:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:15.722 [2024-11-20 15:40:04.469268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.722 [2024-11-20 15:40:04.469792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.722 [2024-11-20 15:40:04.469823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf000 with addr=10.0.0.2, port=4420 00:30:15.722 [2024-11-20 15:40:04.469831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf000 is same with the state(6) to be set 00:30:15.722 [2024-11-20 15:40:04.469996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf000 (9): Bad file descriptor 00:30:15.722 [2024-11-20 15:40:04.470148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:15.722 [2024-11-20 15:40:04.470154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:15.722 [2024-11-20 15:40:04.470166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:30:15.722 [2024-11-20 15:40:04.470172] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:15.722 15:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.722 15:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:15.722 15:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.722 15:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:15.722 [2024-11-20 15:40:04.481878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:15.722 [2024-11-20 15:40:04.481910] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:15.722 15:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.723 15:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 785572 00:30:15.723 [2024-11-20 15:40:04.510910] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:30:17.233 4964.71 IOPS, 19.39 MiB/s [2024-11-20T14:40:07.133Z] 5956.62 IOPS, 23.27 MiB/s [2024-11-20T14:40:08.073Z] 6745.67 IOPS, 26.35 MiB/s [2024-11-20T14:40:09.011Z] 7365.60 IOPS, 28.77 MiB/s [2024-11-20T14:40:10.394Z] 7862.36 IOPS, 30.71 MiB/s [2024-11-20T14:40:11.333Z] 8268.75 IOPS, 32.30 MiB/s [2024-11-20T14:40:12.273Z] 8621.92 IOPS, 33.68 MiB/s [2024-11-20T14:40:13.213Z] 8927.64 IOPS, 34.87 MiB/s [2024-11-20T14:40:13.213Z] 9205.33 IOPS, 35.96 MiB/s 00:30:24.253 Latency(us) 00:30:24.253 [2024-11-20T14:40:13.213Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:24.253 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:24.253 Verification LBA range: start 0x0 length 0x4000 00:30:24.253 Nvme1n1 : 15.01 9202.07 35.95 13823.40 0.00 5540.10 546.13 14199.47 00:30:24.253 [2024-11-20T14:40:13.213Z] =================================================================================================================== 00:30:24.253 [2024-11-20T14:40:13.213Z] Total : 9202.07 35.95 13823.40 0.00 5540.10 546.13 14199.47 00:30:24.253 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:30:24.253 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:24.253 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.253 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:24.253 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.253 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:30:24.253 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:30:24.253 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:24.253 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:30:24.253 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:24.253 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:30:24.253 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- nvmf/common.sh@125 -- # for i in {1..20} 00:30:24.253 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:24.253 rmmod nvme_tcp 00:30:24.253 rmmod nvme_fabrics 00:30:24.253 rmmod nvme_keyring 00:30:24.253 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:24.253 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:30:24.253 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:30:24.253 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 786793 ']' 00:30:24.253 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 786793 00:30:24.253 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 786793 ']' 00:30:24.253 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 786793 00:30:24.253 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:30:24.253 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:24.253 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 786793 00:30:24.513 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:24.513 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:24.513 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 786793' 00:30:24.513 killing process with pid 786793 00:30:24.513 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 786793 00:30:24.513 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 786793 00:30:24.513 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:24.513 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:24.513 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:24.513 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:30:24.513 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:30:24.513 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:24.513 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:30:24.513 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:24.513 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:24.513 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:24.513 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:24.513 15:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:27.057 00:30:27.057 real 0m28.521s 00:30:27.057 user 1m3.984s 00:30:27.057 sys 0m7.818s 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@10 -- # set +x 00:30:27.057 ************************************ 00:30:27.057 END TEST nvmf_bdevperf 00:30:27.057 ************************************ 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.057 ************************************ 00:30:27.057 START TEST nvmf_target_disconnect 00:30:27.057 ************************************ 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:27.057 * Looking for test storage... 00:30:27.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:27.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.057 --rc genhtml_branch_coverage=1 00:30:27.057 --rc genhtml_function_coverage=1 00:30:27.057 --rc genhtml_legend=1 00:30:27.057 --rc geninfo_all_blocks=1 00:30:27.057 --rc geninfo_unexecuted_blocks=1 00:30:27.057 00:30:27.057 ' 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:27.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.057 --rc genhtml_branch_coverage=1 00:30:27.057 --rc genhtml_function_coverage=1 00:30:27.057 --rc genhtml_legend=1 00:30:27.057 --rc geninfo_all_blocks=1 00:30:27.057 --rc geninfo_unexecuted_blocks=1 00:30:27.057 00:30:27.057 ' 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:27.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.057 --rc genhtml_branch_coverage=1 00:30:27.057 --rc genhtml_function_coverage=1 00:30:27.057 --rc genhtml_legend=1 00:30:27.057 --rc geninfo_all_blocks=1 00:30:27.057 --rc geninfo_unexecuted_blocks=1 00:30:27.057 00:30:27.057 ' 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:27.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.057 --rc genhtml_branch_coverage=1 00:30:27.057 --rc genhtml_function_coverage=1 00:30:27.057 --rc genhtml_legend=1 00:30:27.057 --rc geninfo_all_blocks=1 00:30:27.057 --rc geninfo_unexecuted_blocks=1 00:30:27.057 00:30:27.057 ' 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:27.057 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.058 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.058 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.058 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:30:27.058 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.058 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:30:27.058 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:27.058 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:27.058 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:27.058 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:27.058 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:27.058 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:27.058 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:27.058 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:27.058 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:27.058 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:27.058 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:27.058 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:30:27.058 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:30:27.058 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:30:27.058 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:27.058 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:27.058 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:27.058 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:27.058 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:27.058 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.058 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:27.058 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.058 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:27.058 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:27.058 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:30:27.058 15:40:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:35.203 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:35.203 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:35.203 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:35.204 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:35.204 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:35.204 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:35.204 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:35.204 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:35.204 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:35.204 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:35.204 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:35.204 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:35.204 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:35.204 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:35.204 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:35.204 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:35.204 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:35.204 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:35.204 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:35.204 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:35.204 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:35.204 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:35.204 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:30:35.204 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:35.204 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:35.204 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:35.204 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
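The discovery pass above resolves each supported PCI function to its kernel net device simply by listing /sys/bus/pci/devices/$pci/net/ (the pci_net_devs=(...) expansion in the trace); both E810 ports, 0000:4b:00.0 and 0000:4b:00.1 (0x8086 - 0x159b), map to cvl_0_0 and cvl_0_1. A minimal standalone sketch of that lookup; the loop and helper name are illustrative, not the harness's actual code:

    # Sketch: map Intel E810 (0x8086:0x159b) PCI functions to their net devices.
    # pci_to_netdev is a hypothetical helper, not part of nvmf/common.sh.
    pci_to_netdev() {
        local bdf=$1 dev
        for dev in "/sys/bus/pci/devices/$bdf/net/"*; do
            [ -e "$dev" ] && echo "${dev##*/}"    # strip the path, keep e.g. cvl_0_0
        done
    }

    for pci in /sys/bus/pci/devices/*; do
        if [ "$(cat "$pci/vendor")" = "0x8086" ] && [ "$(cat "$pci/device")" = "0x159b" ]; then
            echo "Found ${pci##*/}: $(pci_to_netdev "${pci##*/}")"
        fi
    done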
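One hiccup captured earlier in this trace deserves a note: build_nvmf_app_args evaluated '[' '' -eq 1 ']' and bash reported "test/nvmf/common.sh: line 33: [: : integer expression expected", because the flag being tested expanded to an empty string. The test still falls through to the false branch, so the run continues, but the usual defensive spelling defaults the expansion first; SOME_FLAG below is a stand-in, not the variable the script actually checks:

    # Hedged sketch: default an unset/empty flag to 0 before a numeric test,
    # avoiding "[: : integer expression expected". SOME_FLAG is hypothetical.
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi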
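With both ports identified, the nvmf_tcp_init steps below split them across a network namespace so target and initiator exchange real on-wire TCP while sharing one host: cvl_0_0 (10.0.0.2) moves into cvl_0_0_ns_spdk for the target, cvl_0_1 (10.0.0.1) stays in the root namespace for the initiator, and an iptables rule opens port 4420. A condensed standalone version of those steps, with names taken from the trace (the -m comment tag the harness adds to the iptables rule is dropped here):

    # Sketch of the namespace split performed by nvmf_tcp_init (run as root).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # sanity: initiator -> target

Keeping the target behind "ip netns exec" is what lets the pings and, later, the NVMe/TCP traffic cross a real NIC pair instead of loopback.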
00:30:35.204 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:35.204 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:35.204 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:35.204 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:35.204 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:35.204 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:35.204 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:35.204 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:35.204 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:35.204 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:35.204 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:35.204 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:35.204 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:35.204 15:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:35.204 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:35.204 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:35.204 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:35.204 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:35.204 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:35.204 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:35.204 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:35.204 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:35.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:35.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms 00:30:35.204 00:30:35.204 --- 10.0.0.2 ping statistics --- 00:30:35.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.204 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms 00:30:35.204 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:35.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:35.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:30:35.204 00:30:35.204 --- 10.0.0.1 ping statistics --- 00:30:35.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.204 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:30:35.204 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:35.204 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:30:35.204 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:35.204 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:35.204 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:35.204 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:35.204 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:35.204 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:35.204 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:35.204 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:35.204 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:35.204 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:35.204 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:35.204 ************************************ 00:30:35.204 START TEST nvmf_target_disconnect_tc1 00:30:35.204 ************************************ 00:30:35.204 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:30:35.204 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:35.204 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:30:35.204 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:35.204 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:35.204 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:35.204 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:35.204 15:40:23 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:35.204 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:35.204 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:35.204 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:35.204 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:30:35.205 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:35.205 [2024-11-20 15:40:23.474627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.205 [2024-11-20 15:40:23.474734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2421ad0 with addr=10.0.0.2, port=4420 00:30:35.205 [2024-11-20 15:40:23.474763] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:35.205 [2024-11-20 15:40:23.474777] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:35.205 [2024-11-20 15:40:23.474786] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:30:35.205 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:35.205 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:35.205 Initializing NVMe Controllers 00:30:35.205 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:30:35.205 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:35.205 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:35.205 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:35.205 00:30:35.205 real 0m0.147s 00:30:35.205 user 0m0.072s 00:30:35.205 sys 0m0.074s 00:30:35.205 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:35.205 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:35.205 ************************************ 00:30:35.205 END TEST nvmf_target_disconnect_tc1 00:30:35.205 ************************************ 00:30:35.205 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:35.205 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:35.205 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:30:35.205 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:35.205 ************************************ 00:30:35.205 START TEST nvmf_target_disconnect_tc2 00:30:35.205 ************************************ 00:30:35.205 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:30:35.205 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:30:35.205 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:35.205 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:35.205 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:35.205 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:35.205 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=792951 00:30:35.205 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 792951 00:30:35.205 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:35.205 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 792951 ']' 00:30:35.205 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:35.205 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:35.205 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:35.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:35.205 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:35.205 15:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:35.205 [2024-11-20 15:40:23.636925] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:30:35.205 [2024-11-20 15:40:23.636983] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:35.205 [2024-11-20 15:40:23.736827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:35.205 [2024-11-20 15:40:23.789121] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:35.205 [2024-11-20 15:40:23.789176] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
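tc1 above passes precisely because the connect fails: the NOT wrapper from common/autotest_common.sh runs the reconnect example, captures es=1, and succeeds only when the wrapped command did not. A reduced sketch of that inversion pattern (the real helper also special-cases exit codes above 128 and an allow-list of expected codes, omitted here):

    # Simplified sketch of autotest_common.sh's NOT(): assert an expected failure.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))        # success iff the wrapped command failed
    }

    NOT false && echo "failure detected, test passes"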
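Once waitforlisten sees the new nvmf_tgt answering on its RPC socket, the harness provisions it over JSON-RPC; the rpc_cmd traces below create a 64 MiB malloc bdev, the TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with its namespace, and the 10.0.0.2:4420 listeners. rpc_cmd drives the same methods scripts/rpc.py exposes, so an equivalent standalone sequence (assuming rpc.py is on PATH and the default /var/tmp/spdk.sock) would be:

    # Sketch: the tc2 provisioning sequence as direct scripts/rpc.py calls.
    rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev, 512 B blocks
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420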
00:30:35.205 [2024-11-20 15:40:23.789184] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:35.205 [2024-11-20 15:40:23.789192] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:35.205 [2024-11-20 15:40:23.789198] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:35.205 [2024-11-20 15:40:23.791365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:35.205 [2024-11-20 15:40:23.791524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:35.205 [2024-11-20 15:40:23.791699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:35.205 [2024-11-20 15:40:23.791717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:35.778 15:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:35.778 15:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:30:35.778 15:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:35.778 15:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:35.778 15:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:35.778 15:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:35.778 15:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:35.778 15:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.778 15:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:35.778 Malloc0 00:30:35.778 15:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.778 15:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:35.778 15:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.778 15:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:35.778 [2024-11-20 15:40:24.561295] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:35.778 15:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.778 15:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:35.778 15:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.778 15:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:35.778 15:40:24 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.778 15:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:35.778 15:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.778 15:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:35.778 15:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.778 15:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:35.778 15:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.778 15:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:35.778 [2024-11-20 15:40:24.601755] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:35.778 15:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.778 15:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:35.778 15:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.778 15:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:35.778 15:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.778 15:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=793010 00:30:35.778 15:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:30:35.778 15:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:37.694 15:40:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 792951 00:30:37.694 15:40:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error 
(sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Write completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Write completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Write completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Write completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Write completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Write completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Write completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Write completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Write completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Write completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Write completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 [2024-11-20 15:40:26.642358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Write completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Write completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Write 
completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Write completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Write completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Write completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Write completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Write completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Write completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Write completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Read completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 Write completed with error (sct=0, sc=8) 00:30:37.694 starting I/O failed 00:30:37.694 [2024-11-20 15:40:26.642761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:37.694 [2024-11-20 15:40:26.643180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.694 [2024-11-20 15:40:26.643206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.694 qpair failed and we were unable to recover it. 00:30:37.694 [2024-11-20 15:40:26.643649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.694 [2024-11-20 15:40:26.643712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.694 qpair failed and we were unable to recover it. 00:30:37.694 [2024-11-20 15:40:26.644009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.694 [2024-11-20 15:40:26.644026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.694 qpair failed and we were unable to recover it. 00:30:37.694 [2024-11-20 15:40:26.644469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.694 [2024-11-20 15:40:26.644532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.694 qpair failed and we were unable to recover it. 00:30:37.694 [2024-11-20 15:40:26.644900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.694 [2024-11-20 15:40:26.644914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.694 qpair failed and we were unable to recover it. 
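Stepping back, the tc2 choreography producing the failures above and below is all in the target_disconnect.sh trace: start the target in the namespace, launch the reconnect example as a background workload, give it two seconds of clean I/O, then hard-kill the target mid-stream. A hedged outline of those steps (paths abbreviated, waitforlisten and the provisioning as sketched earlier, error handling omitted):

    # Outline of tc2 as traced in this log.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    # ...waitforlisten "$nvmfpid" + rpc.py provisioning...

    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    reconnectpid=$!

    sleep 2              # workload runs cleanly against the live target
    kill -9 "$nvmfpid"   # yank the target with 32-deep queues in flight
    sleep 2              # the qpair errors in this log are the workload riding it out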
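On the failure detail: CQ transport error -6 is -ENXIO, the in-flight completions dying as the sockets collapse (the log spells it out as "No such device or address"), and errno = 111 on the subsequent connect() attempts is ECONNREFUSED: with nvmf_tgt gone, nothing listens on 10.0.0.2:4420, so every retry is refused until a target comes back. A hypothetical probe loop for that condition, using bash's /dev/tcp redirection:

    # Hedged sketch: poll until something accepts TCP on the target address
    # again. wait_for_listener is illustrative, not a harness helper.
    wait_for_listener() {
        local addr=$1 port=$2
        # Each probe runs in a subshell so its fd closes on exit; while the
        # target is down the redirection fails with ECONNREFUSED (errno 111).
        until (exec 3<>"/dev/tcp/$addr/$port") 2>/dev/null; do
            sleep 0.5
        done
    }

    wait_for_listener 10.0.0.2 4420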
00:30:37.694 [2024-11-20 15:40:26.645410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.695 [2024-11-20 15:40:26.645472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.695 qpair failed and we were unable to recover it. 00:30:37.695 [2024-11-20 15:40:26.645800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.695 [2024-11-20 15:40:26.645814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.695 qpair failed and we were unable to recover it. 00:30:37.695 [2024-11-20 15:40:26.646180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.695 [2024-11-20 15:40:26.646194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.695 qpair failed and we were unable to recover it. 00:30:37.695 [2024-11-20 15:40:26.646489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.695 [2024-11-20 15:40:26.646501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.695 qpair failed and we were unable to recover it. 00:30:37.695 [2024-11-20 15:40:26.646823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.695 [2024-11-20 15:40:26.646836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.695 qpair failed and we were unable to recover it. 00:30:37.695 [2024-11-20 15:40:26.647147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.695 [2024-11-20 15:40:26.647168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.695 qpair failed and we were unable to recover it. 00:30:37.695 [2024-11-20 15:40:26.647566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.695 [2024-11-20 15:40:26.647578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.695 qpair failed and we were unable to recover it. 00:30:37.695 [2024-11-20 15:40:26.647925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.695 [2024-11-20 15:40:26.647937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.695 qpair failed and we were unable to recover it. 00:30:37.695 [2024-11-20 15:40:26.648280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.695 [2024-11-20 15:40:26.648293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.695 qpair failed and we were unable to recover it. 00:30:37.695 [2024-11-20 15:40:26.648631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.695 [2024-11-20 15:40:26.648643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.695 qpair failed and we were unable to recover it. 
00:30:37.695 [2024-11-20 15:40:26.649011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.695 [2024-11-20 15:40:26.649032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.695 qpair failed and we were unable to recover it. 00:30:37.695 [2024-11-20 15:40:26.649318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.695 [2024-11-20 15:40:26.649330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.695 qpair failed and we were unable to recover it. 00:30:37.695 [2024-11-20 15:40:26.649445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.695 [2024-11-20 15:40:26.649458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.695 qpair failed and we were unable to recover it. 00:30:37.695 [2024-11-20 15:40:26.649639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.695 [2024-11-20 15:40:26.649652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.695 qpair failed and we were unable to recover it. 00:30:37.695 [2024-11-20 15:40:26.649996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.695 [2024-11-20 15:40:26.650007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.695 qpair failed and we were unable to recover it. 00:30:37.695 [2024-11-20 15:40:26.650359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.695 [2024-11-20 15:40:26.650373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.695 qpair failed and we were unable to recover it. 00:30:37.695 [2024-11-20 15:40:26.650698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.695 [2024-11-20 15:40:26.650709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.695 qpair failed and we were unable to recover it. 00:30:37.695 [2024-11-20 15:40:26.651017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.695 [2024-11-20 15:40:26.651030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.695 qpair failed and we were unable to recover it. 00:30:37.695 [2024-11-20 15:40:26.651289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.695 [2024-11-20 15:40:26.651302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.695 qpair failed and we were unable to recover it. 00:30:37.695 [2024-11-20 15:40:26.651625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.695 [2024-11-20 15:40:26.651637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.695 qpair failed and we were unable to recover it. 
00:30:37.695 [2024-11-20 15:40:26.651946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.695 [2024-11-20 15:40:26.651957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.695 qpair failed and we were unable to recover it. 00:30:37.695 [2024-11-20 15:40:26.652271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.695 [2024-11-20 15:40:26.652283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.695 qpair failed and we were unable to recover it. 00:30:37.969 [2024-11-20 15:40:26.652610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.969 [2024-11-20 15:40:26.652625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.970 qpair failed and we were unable to recover it. 00:30:37.970 [2024-11-20 15:40:26.652926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.970 [2024-11-20 15:40:26.652938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.970 qpair failed and we were unable to recover it. 00:30:37.970 [2024-11-20 15:40:26.653235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.970 [2024-11-20 15:40:26.653249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.970 qpair failed and we were unable to recover it. 00:30:37.970 [2024-11-20 15:40:26.653559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.970 [2024-11-20 15:40:26.653570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.970 qpair failed and we were unable to recover it. 00:30:37.970 [2024-11-20 15:40:26.653882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.970 [2024-11-20 15:40:26.653894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.970 qpair failed and we were unable to recover it. 00:30:37.970 [2024-11-20 15:40:26.654216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.970 [2024-11-20 15:40:26.654228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.970 qpair failed and we were unable to recover it. 00:30:37.970 [2024-11-20 15:40:26.654581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.970 [2024-11-20 15:40:26.654593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.970 qpair failed and we were unable to recover it. 00:30:37.970 [2024-11-20 15:40:26.654958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.970 [2024-11-20 15:40:26.654969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.970 qpair failed and we were unable to recover it. 
00:30:37.970 [2024-11-20 15:40:26.655181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.970 [2024-11-20 15:40:26.655193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.970 qpair failed and we were unable to recover it. 00:30:37.970 [2024-11-20 15:40:26.655534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.970 [2024-11-20 15:40:26.655545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.970 qpair failed and we were unable to recover it. 00:30:37.970 [2024-11-20 15:40:26.655761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.970 [2024-11-20 15:40:26.655773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.970 qpair failed and we were unable to recover it. 00:30:37.970 [2024-11-20 15:40:26.656102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.970 [2024-11-20 15:40:26.656115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.970 qpair failed and we were unable to recover it. 00:30:37.970 [2024-11-20 15:40:26.656488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.970 [2024-11-20 15:40:26.656502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.970 qpair failed and we were unable to recover it. 00:30:37.970 [2024-11-20 15:40:26.656846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.970 [2024-11-20 15:40:26.656857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.970 qpair failed and we were unable to recover it. 00:30:37.970 [2024-11-20 15:40:26.657199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.970 [2024-11-20 15:40:26.657211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.970 qpair failed and we were unable to recover it. 00:30:37.970 [2024-11-20 15:40:26.657557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.970 [2024-11-20 15:40:26.657569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.970 qpair failed and we were unable to recover it. 00:30:37.970 [2024-11-20 15:40:26.657897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.970 [2024-11-20 15:40:26.657909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.970 qpair failed and we were unable to recover it. 00:30:37.970 [2024-11-20 15:40:26.658077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.970 [2024-11-20 15:40:26.658089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.970 qpair failed and we were unable to recover it. 
00:30:37.970 [2024-11-20 15:40:26.658402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.970 [2024-11-20 15:40:26.658416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.970 qpair failed and we were unable to recover it. 00:30:37.970 [2024-11-20 15:40:26.658727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.970 [2024-11-20 15:40:26.658739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.970 qpair failed and we were unable to recover it. 00:30:37.970 [2024-11-20 15:40:26.659039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.970 [2024-11-20 15:40:26.659051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.970 qpair failed and we were unable to recover it. 00:30:37.970 [2024-11-20 15:40:26.659355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.970 [2024-11-20 15:40:26.659366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.970 qpair failed and we were unable to recover it. 00:30:37.970 [2024-11-20 15:40:26.659666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.970 [2024-11-20 15:40:26.659678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.970 qpair failed and we were unable to recover it. 00:30:37.970 [2024-11-20 15:40:26.660034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.970 [2024-11-20 15:40:26.660045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.970 qpair failed and we were unable to recover it. 00:30:37.970 [2024-11-20 15:40:26.660403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.970 [2024-11-20 15:40:26.660417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.970 qpair failed and we were unable to recover it. 00:30:37.970 [2024-11-20 15:40:26.660767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.970 [2024-11-20 15:40:26.660779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.970 qpair failed and we were unable to recover it. 00:30:37.970 [2024-11-20 15:40:26.661130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.970 [2024-11-20 15:40:26.661141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.970 qpair failed and we were unable to recover it. 00:30:37.970 [2024-11-20 15:40:26.661351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.970 [2024-11-20 15:40:26.661363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.970 qpair failed and we were unable to recover it. 
00:30:37.970 [2024-11-20 15:40:26.661714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.970 [2024-11-20 15:40:26.661726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.970 qpair failed and we were unable to recover it. 00:30:37.970 [2024-11-20 15:40:26.662027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.970 [2024-11-20 15:40:26.662041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.970 qpair failed and we were unable to recover it. 00:30:37.970 [2024-11-20 15:40:26.662382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.970 [2024-11-20 15:40:26.662395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.970 qpair failed and we were unable to recover it. 00:30:37.970 [2024-11-20 15:40:26.662702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.970 [2024-11-20 15:40:26.662714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.970 qpair failed and we were unable to recover it. 00:30:37.970 [2024-11-20 15:40:26.663047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.970 [2024-11-20 15:40:26.663060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.970 qpair failed and we were unable to recover it. 00:30:37.970 [2024-11-20 15:40:26.663253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.970 [2024-11-20 15:40:26.663269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.970 qpair failed and we were unable to recover it. 00:30:37.970 [2024-11-20 15:40:26.663608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.970 [2024-11-20 15:40:26.663622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.970 qpair failed and we were unable to recover it. 00:30:37.970 [2024-11-20 15:40:26.663927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.970 [2024-11-20 15:40:26.663941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.970 qpair failed and we were unable to recover it. 00:30:37.970 [2024-11-20 15:40:26.664297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.970 [2024-11-20 15:40:26.664312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.970 qpair failed and we were unable to recover it. 00:30:37.970 [2024-11-20 15:40:26.664640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.970 [2024-11-20 15:40:26.664654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.970 qpair failed and we were unable to recover it. 
00:30:37.971 [2024-11-20 15:40:26.665056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.971 [2024-11-20 15:40:26.665071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.971 qpair failed and we were unable to recover it. 00:30:37.971 [2024-11-20 15:40:26.665276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.971 [2024-11-20 15:40:26.665290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.971 qpair failed and we were unable to recover it. 00:30:37.971 [2024-11-20 15:40:26.665593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.971 [2024-11-20 15:40:26.665606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.971 qpair failed and we were unable to recover it. 00:30:37.971 [2024-11-20 15:40:26.665946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.971 [2024-11-20 15:40:26.665958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.971 qpair failed and we were unable to recover it. 00:30:37.971 [2024-11-20 15:40:26.666206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.971 [2024-11-20 15:40:26.666219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.971 qpair failed and we were unable to recover it. 00:30:37.971 [2024-11-20 15:40:26.666567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.971 [2024-11-20 15:40:26.666580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.971 qpair failed and we were unable to recover it. 00:30:37.971 [2024-11-20 15:40:26.666898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.971 [2024-11-20 15:40:26.666912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.971 qpair failed and we were unable to recover it. 00:30:37.971 [2024-11-20 15:40:26.667104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.971 [2024-11-20 15:40:26.667118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.971 qpair failed and we were unable to recover it. 00:30:37.971 [2024-11-20 15:40:26.667491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.971 [2024-11-20 15:40:26.667506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.971 qpair failed and we were unable to recover it. 00:30:37.971 [2024-11-20 15:40:26.667815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.971 [2024-11-20 15:40:26.667828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.971 qpair failed and we were unable to recover it. 
00:30:37.971 [2024-11-20 15:40:26.668145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.971 [2024-11-20 15:40:26.668157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.971 qpair failed and we were unable to recover it. 00:30:37.971 [2024-11-20 15:40:26.668421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.971 [2024-11-20 15:40:26.668434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.971 qpair failed and we were unable to recover it. 00:30:37.971 [2024-11-20 15:40:26.668761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.971 [2024-11-20 15:40:26.668773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.971 qpair failed and we were unable to recover it. 00:30:37.971 [2024-11-20 15:40:26.669101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.971 [2024-11-20 15:40:26.669115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.971 qpair failed and we were unable to recover it. 00:30:37.971 [2024-11-20 15:40:26.669413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.971 [2024-11-20 15:40:26.669427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.971 qpair failed and we were unable to recover it. 00:30:37.971 [2024-11-20 15:40:26.669786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.971 [2024-11-20 15:40:26.669800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.971 qpair failed and we were unable to recover it. 00:30:37.971 [2024-11-20 15:40:26.670146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.971 [2024-11-20 15:40:26.670166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.971 qpair failed and we were unable to recover it. 00:30:37.971 [2024-11-20 15:40:26.670495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.971 [2024-11-20 15:40:26.670509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.971 qpair failed and we were unable to recover it. 00:30:37.971 [2024-11-20 15:40:26.670837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.971 [2024-11-20 15:40:26.670853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.971 qpair failed and we were unable to recover it. 00:30:37.971 [2024-11-20 15:40:26.671178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.971 [2024-11-20 15:40:26.671192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.971 qpair failed and we were unable to recover it. 
00:30:37.971 [2024-11-20 15:40:26.671512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.971 [2024-11-20 15:40:26.671525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.971 qpair failed and we were unable to recover it. 00:30:37.971 [2024-11-20 15:40:26.671838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.971 [2024-11-20 15:40:26.671852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.971 qpair failed and we were unable to recover it. 00:30:37.971 [2024-11-20 15:40:26.672171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.971 [2024-11-20 15:40:26.672193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.971 qpair failed and we were unable to recover it. 00:30:37.971 [2024-11-20 15:40:26.672600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.971 [2024-11-20 15:40:26.672612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.971 qpair failed and we were unable to recover it. 00:30:37.971 [2024-11-20 15:40:26.672805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.971 [2024-11-20 15:40:26.672819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.971 qpair failed and we were unable to recover it. 00:30:37.971 [2024-11-20 15:40:26.673189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.971 [2024-11-20 15:40:26.673202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.971 qpair failed and we were unable to recover it. 00:30:37.971 [2024-11-20 15:40:26.673541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.971 [2024-11-20 15:40:26.673554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.971 qpair failed and we were unable to recover it. 00:30:37.971 [2024-11-20 15:40:26.673890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.971 [2024-11-20 15:40:26.673903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.971 qpair failed and we were unable to recover it. 00:30:37.971 [2024-11-20 15:40:26.674274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.971 [2024-11-20 15:40:26.674287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.971 qpair failed and we were unable to recover it. 00:30:37.971 [2024-11-20 15:40:26.674653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.971 [2024-11-20 15:40:26.674666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.971 qpair failed and we were unable to recover it. 
00:30:37.971 [2024-11-20 15:40:26.674964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.971 [2024-11-20 15:40:26.674978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.971 qpair failed and we were unable to recover it. 00:30:37.971 [2024-11-20 15:40:26.675304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.971 [2024-11-20 15:40:26.675319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.971 qpair failed and we were unable to recover it. 00:30:37.971 [2024-11-20 15:40:26.675643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.971 [2024-11-20 15:40:26.675660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.971 qpair failed and we were unable to recover it. 00:30:37.971 [2024-11-20 15:40:26.676040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.971 [2024-11-20 15:40:26.676057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.971 qpair failed and we were unable to recover it. 00:30:37.971 [2024-11-20 15:40:26.676391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.971 [2024-11-20 15:40:26.676408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.971 qpair failed and we were unable to recover it. 00:30:37.971 [2024-11-20 15:40:26.676774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.971 [2024-11-20 15:40:26.676788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.971 qpair failed and we were unable to recover it. 00:30:37.971 [2024-11-20 15:40:26.677144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.971 [2024-11-20 15:40:26.677168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.971 qpair failed and we were unable to recover it. 00:30:37.971 [2024-11-20 15:40:26.677540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.971 [2024-11-20 15:40:26.677554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.971 qpair failed and we were unable to recover it. 00:30:37.972 [2024-11-20 15:40:26.677873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.972 [2024-11-20 15:40:26.677888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.972 qpair failed and we were unable to recover it. 00:30:37.972 [2024-11-20 15:40:26.678259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.972 [2024-11-20 15:40:26.678275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.972 qpair failed and we were unable to recover it. 
00:30:37.972 [2024-11-20 15:40:26.678710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.972 [2024-11-20 15:40:26.678725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.972 qpair failed and we were unable to recover it. 00:30:37.972 [2024-11-20 15:40:26.679095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.972 [2024-11-20 15:40:26.679109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.972 qpair failed and we were unable to recover it. 00:30:37.972 [2024-11-20 15:40:26.679439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.972 [2024-11-20 15:40:26.679455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.972 qpair failed and we were unable to recover it. 00:30:37.972 [2024-11-20 15:40:26.679823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.972 [2024-11-20 15:40:26.679838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.972 qpair failed and we were unable to recover it. 00:30:37.972 [2024-11-20 15:40:26.680171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.972 [2024-11-20 15:40:26.680195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.972 qpair failed and we were unable to recover it. 00:30:37.972 [2024-11-20 15:40:26.680507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.972 [2024-11-20 15:40:26.680522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.972 qpair failed and we were unable to recover it. 00:30:37.972 [2024-11-20 15:40:26.680867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.972 [2024-11-20 15:40:26.680883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.972 qpair failed and we were unable to recover it. 00:30:37.972 [2024-11-20 15:40:26.681204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.972 [2024-11-20 15:40:26.681220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.972 qpair failed and we were unable to recover it. 00:30:37.972 [2024-11-20 15:40:26.681452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.972 [2024-11-20 15:40:26.681468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.972 qpair failed and we were unable to recover it. 00:30:37.972 [2024-11-20 15:40:26.681766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.972 [2024-11-20 15:40:26.681780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.972 qpair failed and we were unable to recover it. 
00:30:37.972 [2024-11-20 15:40:26.681987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.972 [2024-11-20 15:40:26.682002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.972 qpair failed and we were unable to recover it. 00:30:37.972 [2024-11-20 15:40:26.682317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.972 [2024-11-20 15:40:26.682332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.972 qpair failed and we were unable to recover it. 00:30:37.972 [2024-11-20 15:40:26.682595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.972 [2024-11-20 15:40:26.682611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.972 qpair failed and we were unable to recover it. 00:30:37.972 [2024-11-20 15:40:26.682835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.972 [2024-11-20 15:40:26.682851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.972 qpair failed and we were unable to recover it. 00:30:37.972 [2024-11-20 15:40:26.683177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.972 [2024-11-20 15:40:26.683193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.972 qpair failed and we were unable to recover it. 00:30:37.972 [2024-11-20 15:40:26.683538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.972 [2024-11-20 15:40:26.683554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.972 qpair failed and we were unable to recover it. 00:30:37.972 [2024-11-20 15:40:26.683886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.972 [2024-11-20 15:40:26.683900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.972 qpair failed and we were unable to recover it. 00:30:37.972 [2024-11-20 15:40:26.684241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.972 [2024-11-20 15:40:26.684258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.972 qpair failed and we were unable to recover it. 00:30:37.972 [2024-11-20 15:40:26.684801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.972 [2024-11-20 15:40:26.684816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.972 qpair failed and we were unable to recover it. 00:30:37.972 [2024-11-20 15:40:26.685120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.972 [2024-11-20 15:40:26.685141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.972 qpair failed and we were unable to recover it. 
00:30:37.972 [2024-11-20 15:40:26.685449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.972 [2024-11-20 15:40:26.685465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.972 qpair failed and we were unable to recover it. 00:30:37.972 [2024-11-20 15:40:26.685864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.972 [2024-11-20 15:40:26.685878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.972 qpair failed and we were unable to recover it. 00:30:37.972 [2024-11-20 15:40:26.686199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.972 [2024-11-20 15:40:26.686215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.972 qpair failed and we were unable to recover it. 00:30:37.972 [2024-11-20 15:40:26.686544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.972 [2024-11-20 15:40:26.686559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.972 qpair failed and we were unable to recover it. 00:30:37.972 [2024-11-20 15:40:26.686879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.972 [2024-11-20 15:40:26.686896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.972 qpair failed and we were unable to recover it. 00:30:37.972 [2024-11-20 15:40:26.687215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.972 [2024-11-20 15:40:26.687231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.972 qpair failed and we were unable to recover it. 00:30:37.972 [2024-11-20 15:40:26.687568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.972 [2024-11-20 15:40:26.687583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.972 qpair failed and we were unable to recover it. 00:30:37.972 [2024-11-20 15:40:26.687908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.972 [2024-11-20 15:40:26.687922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.972 qpair failed and we were unable to recover it. 00:30:37.972 [2024-11-20 15:40:26.688175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.972 [2024-11-20 15:40:26.688198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.972 qpair failed and we were unable to recover it. 00:30:37.972 [2024-11-20 15:40:26.688527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.972 [2024-11-20 15:40:26.688542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.972 qpair failed and we were unable to recover it. 
00:30:37.972 [2024-11-20 15:40:26.688937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.972 [2024-11-20 15:40:26.688953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.972 qpair failed and we were unable to recover it. 00:30:37.972 [2024-11-20 15:40:26.689328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.972 [2024-11-20 15:40:26.689346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.972 qpair failed and we were unable to recover it. 00:30:37.972 [2024-11-20 15:40:26.689707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.972 [2024-11-20 15:40:26.689723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.972 qpair failed and we were unable to recover it. 00:30:37.972 [2024-11-20 15:40:26.690093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.972 [2024-11-20 15:40:26.690109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.972 qpair failed and we were unable to recover it. 00:30:37.972 [2024-11-20 15:40:26.690445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.972 [2024-11-20 15:40:26.690460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.972 qpair failed and we were unable to recover it. 00:30:37.972 [2024-11-20 15:40:26.690771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.972 [2024-11-20 15:40:26.690785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.973 qpair failed and we were unable to recover it. 00:30:37.973 [2024-11-20 15:40:26.691111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.973 [2024-11-20 15:40:26.691125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.973 qpair failed and we were unable to recover it. 00:30:37.973 [2024-11-20 15:40:26.691470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.973 [2024-11-20 15:40:26.691487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.973 qpair failed and we were unable to recover it. 00:30:37.973 [2024-11-20 15:40:26.691812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.973 [2024-11-20 15:40:26.691829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.973 qpair failed and we were unable to recover it. 00:30:37.973 [2024-11-20 15:40:26.692177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.973 [2024-11-20 15:40:26.692202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.973 qpair failed and we were unable to recover it. 
00:30:37.973 [2024-11-20 15:40:26.692524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.973 [2024-11-20 15:40:26.692539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.973 qpair failed and we were unable to recover it. 00:30:37.973 [2024-11-20 15:40:26.692874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.973 [2024-11-20 15:40:26.692889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.973 qpair failed and we were unable to recover it. 00:30:37.973 [2024-11-20 15:40:26.693223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.973 [2024-11-20 15:40:26.693238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.973 qpair failed and we were unable to recover it. 00:30:37.973 [2024-11-20 15:40:26.693576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.973 [2024-11-20 15:40:26.693591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.973 qpair failed and we were unable to recover it. 00:30:37.973 [2024-11-20 15:40:26.693911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.973 [2024-11-20 15:40:26.693927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.973 qpair failed and we were unable to recover it. 00:30:37.973 [2024-11-20 15:40:26.694324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.973 [2024-11-20 15:40:26.694339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.973 qpair failed and we were unable to recover it. 00:30:37.973 [2024-11-20 15:40:26.694684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.973 [2024-11-20 15:40:26.694699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.973 qpair failed and we were unable to recover it. 00:30:37.973 [2024-11-20 15:40:26.695024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.973 [2024-11-20 15:40:26.695038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.973 qpair failed and we were unable to recover it. 00:30:37.973 [2024-11-20 15:40:26.695450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.973 [2024-11-20 15:40:26.695465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.973 qpair failed and we were unable to recover it. 00:30:37.973 [2024-11-20 15:40:26.695793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.973 [2024-11-20 15:40:26.695808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.973 qpair failed and we were unable to recover it. 
00:30:37.973 [2024-11-20 15:40:26.696154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.973 [2024-11-20 15:40:26.696192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.973 qpair failed and we were unable to recover it. 00:30:37.973 [2024-11-20 15:40:26.696525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.973 [2024-11-20 15:40:26.696540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.973 qpair failed and we were unable to recover it. 00:30:37.973 [2024-11-20 15:40:26.696934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.973 [2024-11-20 15:40:26.696948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.973 qpair failed and we were unable to recover it. 00:30:37.973 [2024-11-20 15:40:26.697258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.973 [2024-11-20 15:40:26.697274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.973 qpair failed and we were unable to recover it. 00:30:37.973 [2024-11-20 15:40:26.697642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.973 [2024-11-20 15:40:26.697656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.973 qpair failed and we were unable to recover it. 00:30:37.973 [2024-11-20 15:40:26.697971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.973 [2024-11-20 15:40:26.697988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.973 qpair failed and we were unable to recover it. 00:30:37.973 [2024-11-20 15:40:26.698326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.973 [2024-11-20 15:40:26.698342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.973 qpair failed and we were unable to recover it. 00:30:37.973 [2024-11-20 15:40:26.698667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.973 [2024-11-20 15:40:26.698683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.973 qpair failed and we were unable to recover it. 00:30:37.973 [2024-11-20 15:40:26.699026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.973 [2024-11-20 15:40:26.699040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.973 qpair failed and we were unable to recover it. 00:30:37.973 [2024-11-20 15:40:26.699380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.973 [2024-11-20 15:40:26.699395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.973 qpair failed and we were unable to recover it. 
00:30:37.973 [2024-11-20 15:40:26.699723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.973 [2024-11-20 15:40:26.699738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.973 qpair failed and we were unable to recover it. 00:30:37.973 [2024-11-20 15:40:26.700055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.973 [2024-11-20 15:40:26.700070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.973 qpair failed and we were unable to recover it. 00:30:37.973 [2024-11-20 15:40:26.700313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.973 [2024-11-20 15:40:26.700329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.973 qpair failed and we were unable to recover it. 00:30:37.973 [2024-11-20 15:40:26.700654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.973 [2024-11-20 15:40:26.700669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.973 qpair failed and we were unable to recover it. 00:30:37.973 [2024-11-20 15:40:26.700993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.973 [2024-11-20 15:40:26.701007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.973 qpair failed and we were unable to recover it. 00:30:37.973 [2024-11-20 15:40:26.701299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.973 [2024-11-20 15:40:26.701315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.973 qpair failed and we were unable to recover it. 00:30:37.973 [2024-11-20 15:40:26.701690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.973 [2024-11-20 15:40:26.701704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.973 qpair failed and we were unable to recover it. 00:30:37.973 [2024-11-20 15:40:26.702037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.973 [2024-11-20 15:40:26.702053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.973 qpair failed and we were unable to recover it. 00:30:37.973 [2024-11-20 15:40:26.702394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.973 [2024-11-20 15:40:26.702409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.973 qpair failed and we were unable to recover it. 00:30:37.973 [2024-11-20 15:40:26.702754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.973 [2024-11-20 15:40:26.702768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.973 qpair failed and we were unable to recover it. 
00:30:37.973 [2024-11-20 15:40:26.703105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.973 [2024-11-20 15:40:26.703120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.973 qpair failed and we were unable to recover it. 00:30:37.973 [2024-11-20 15:40:26.703460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.973 [2024-11-20 15:40:26.703477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.973 qpair failed and we were unable to recover it. 00:30:37.973 [2024-11-20 15:40:26.703808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.973 [2024-11-20 15:40:26.703823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.973 qpair failed and we were unable to recover it. 00:30:37.973 [2024-11-20 15:40:26.704177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.973 [2024-11-20 15:40:26.704201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.973 qpair failed and we were unable to recover it. 00:30:37.974 [2024-11-20 15:40:26.704514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.974 [2024-11-20 15:40:26.704529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.974 qpair failed and we were unable to recover it. 00:30:37.974 [2024-11-20 15:40:26.704867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.974 [2024-11-20 15:40:26.704883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.974 qpair failed and we were unable to recover it. 00:30:37.974 [2024-11-20 15:40:26.705222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.974 [2024-11-20 15:40:26.705237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.974 qpair failed and we were unable to recover it. 00:30:37.974 [2024-11-20 15:40:26.705571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.974 [2024-11-20 15:40:26.705585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.974 qpair failed and we were unable to recover it. 00:30:37.974 [2024-11-20 15:40:26.705923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.974 [2024-11-20 15:40:26.705938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.974 qpair failed and we were unable to recover it. 00:30:37.974 [2024-11-20 15:40:26.706263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.974 [2024-11-20 15:40:26.706279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.974 qpair failed and we were unable to recover it. 
00:30:37.974 [2024-11-20 15:40:26.706632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.974 [2024-11-20 15:40:26.706647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.974 qpair failed and we were unable to recover it. 00:30:37.974 [2024-11-20 15:40:26.706985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.974 [2024-11-20 15:40:26.707002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.974 qpair failed and we were unable to recover it. 00:30:37.974 [2024-11-20 15:40:26.707342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.974 [2024-11-20 15:40:26.707357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.974 qpair failed and we were unable to recover it. 00:30:37.974 [2024-11-20 15:40:26.707667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.974 [2024-11-20 15:40:26.707681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.974 qpair failed and we were unable to recover it. 00:30:37.974 [2024-11-20 15:40:26.708028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.974 [2024-11-20 15:40:26.708043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.974 qpair failed and we were unable to recover it. 00:30:37.974 [2024-11-20 15:40:26.708390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.974 [2024-11-20 15:40:26.708407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.974 qpair failed and we were unable to recover it. 00:30:37.974 [2024-11-20 15:40:26.708757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.974 [2024-11-20 15:40:26.708771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.974 qpair failed and we were unable to recover it. 00:30:37.974 [2024-11-20 15:40:26.709100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.974 [2024-11-20 15:40:26.709119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.974 qpair failed and we were unable to recover it. 00:30:37.974 [2024-11-20 15:40:26.709461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.974 [2024-11-20 15:40:26.709476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.974 qpair failed and we were unable to recover it. 00:30:37.974 [2024-11-20 15:40:26.709870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.974 [2024-11-20 15:40:26.709886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.974 qpair failed and we were unable to recover it. 
00:30:37.974 [2024-11-20 15:40:26.710265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.974 [2024-11-20 15:40:26.710280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.974 qpair failed and we were unable to recover it. 00:30:37.974 [2024-11-20 15:40:26.710627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.974 [2024-11-20 15:40:26.710642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.974 qpair failed and we were unable to recover it. 00:30:37.974 [2024-11-20 15:40:26.710953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.974 [2024-11-20 15:40:26.710968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.974 qpair failed and we were unable to recover it. 00:30:37.974 [2024-11-20 15:40:26.711307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.974 [2024-11-20 15:40:26.711322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.974 qpair failed and we were unable to recover it. 00:30:37.974 [2024-11-20 15:40:26.711671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.974 [2024-11-20 15:40:26.711687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.974 qpair failed and we were unable to recover it. 00:30:37.974 [2024-11-20 15:40:26.711875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.974 [2024-11-20 15:40:26.711893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.974 qpair failed and we were unable to recover it. 00:30:37.974 [2024-11-20 15:40:26.712120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.974 [2024-11-20 15:40:26.712136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.974 qpair failed and we were unable to recover it. 00:30:37.974 [2024-11-20 15:40:26.712478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.974 [2024-11-20 15:40:26.712495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.974 qpair failed and we were unable to recover it. 00:30:37.974 [2024-11-20 15:40:26.712827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.974 [2024-11-20 15:40:26.712843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.974 qpair failed and we were unable to recover it. 00:30:37.974 [2024-11-20 15:40:26.713182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.974 [2024-11-20 15:40:26.713198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.974 qpair failed and we were unable to recover it. 
00:30:37.974 [2024-11-20 15:40:26.713545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:37.974 [2024-11-20 15:40:26.713559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420
00:30:37.974 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats, with only the timestamps changing, for every subsequent reconnect attempt from 15:40:26.713916 through 15:40:26.786179; duplicate entries omitted ...]
00:30:37.980 [2024-11-20 15:40:26.786412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:37.980 [2024-11-20 15:40:26.786427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420
00:30:37.980 qpair failed and we were unable to recover it.
00:30:37.980 [2024-11-20 15:40:26.786764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.980 [2024-11-20 15:40:26.786781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.980 qpair failed and we were unable to recover it. 00:30:37.980 [2024-11-20 15:40:26.787093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.980 [2024-11-20 15:40:26.787112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.980 qpair failed and we were unable to recover it. 00:30:37.980 [2024-11-20 15:40:26.787456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.980 [2024-11-20 15:40:26.787473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.980 qpair failed and we were unable to recover it. 00:30:37.980 [2024-11-20 15:40:26.787813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.980 [2024-11-20 15:40:26.787828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.980 qpair failed and we were unable to recover it. 00:30:37.980 [2024-11-20 15:40:26.788047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.980 [2024-11-20 15:40:26.788062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.980 qpair failed and we were unable to recover it. 00:30:37.980 [2024-11-20 15:40:26.788369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.980 [2024-11-20 15:40:26.788387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.980 qpair failed and we were unable to recover it. 00:30:37.980 [2024-11-20 15:40:26.788729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.980 [2024-11-20 15:40:26.788745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.980 qpair failed and we were unable to recover it. 00:30:37.980 [2024-11-20 15:40:26.789100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.980 [2024-11-20 15:40:26.789117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.980 qpair failed and we were unable to recover it. 00:30:37.980 [2024-11-20 15:40:26.789489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.980 [2024-11-20 15:40:26.789506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.980 qpair failed and we were unable to recover it. 00:30:37.980 [2024-11-20 15:40:26.789844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.980 [2024-11-20 15:40:26.789861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.980 qpair failed and we were unable to recover it. 
00:30:37.980 [2024-11-20 15:40:26.790085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.980 [2024-11-20 15:40:26.790101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.980 qpair failed and we were unable to recover it. 00:30:37.980 [2024-11-20 15:40:26.790391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.980 [2024-11-20 15:40:26.790408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.980 qpair failed and we were unable to recover it. 00:30:37.980 [2024-11-20 15:40:26.790768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.980 [2024-11-20 15:40:26.790782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.980 qpair failed and we were unable to recover it. 00:30:37.980 [2024-11-20 15:40:26.791137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.980 [2024-11-20 15:40:26.791152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.980 qpair failed and we were unable to recover it. 00:30:37.980 [2024-11-20 15:40:26.791497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.980 [2024-11-20 15:40:26.791512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.980 qpair failed and we were unable to recover it. 00:30:37.980 [2024-11-20 15:40:26.791840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.980 [2024-11-20 15:40:26.791856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.980 qpair failed and we were unable to recover it. 00:30:37.980 [2024-11-20 15:40:26.792209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.980 [2024-11-20 15:40:26.792227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.980 qpair failed and we were unable to recover it. 00:30:37.980 [2024-11-20 15:40:26.792561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.980 [2024-11-20 15:40:26.792577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.980 qpair failed and we were unable to recover it. 00:30:37.980 [2024-11-20 15:40:26.792978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.980 [2024-11-20 15:40:26.792994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.980 qpair failed and we were unable to recover it. 00:30:37.980 [2024-11-20 15:40:26.793379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.980 [2024-11-20 15:40:26.793395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.980 qpair failed and we were unable to recover it. 
00:30:37.980 [2024-11-20 15:40:26.793738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.981 [2024-11-20 15:40:26.793753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.981 qpair failed and we were unable to recover it. 00:30:37.981 [2024-11-20 15:40:26.794113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.981 [2024-11-20 15:40:26.794127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.981 qpair failed and we were unable to recover it. 00:30:37.981 [2024-11-20 15:40:26.794437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.981 [2024-11-20 15:40:26.794458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.981 qpair failed and we were unable to recover it. 00:30:37.981 [2024-11-20 15:40:26.794792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.981 [2024-11-20 15:40:26.794807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.981 qpair failed and we were unable to recover it. 00:30:37.981 [2024-11-20 15:40:26.795136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.981 [2024-11-20 15:40:26.795151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.981 qpair failed and we were unable to recover it. 00:30:37.981 [2024-11-20 15:40:26.795514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.981 [2024-11-20 15:40:26.795529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.981 qpair failed and we were unable to recover it. 00:30:37.981 [2024-11-20 15:40:26.795854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.981 [2024-11-20 15:40:26.795870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.981 qpair failed and we were unable to recover it. 00:30:37.981 [2024-11-20 15:40:26.796241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.981 [2024-11-20 15:40:26.796258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.981 qpair failed and we were unable to recover it. 00:30:37.981 [2024-11-20 15:40:26.796611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.981 [2024-11-20 15:40:26.796626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.981 qpair failed and we were unable to recover it. 00:30:37.981 [2024-11-20 15:40:26.796962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.981 [2024-11-20 15:40:26.796977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.981 qpair failed and we were unable to recover it. 
00:30:37.981 [2024-11-20 15:40:26.797212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.981 [2024-11-20 15:40:26.797227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.981 qpair failed and we were unable to recover it. 00:30:37.981 [2024-11-20 15:40:26.797613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.981 [2024-11-20 15:40:26.797627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.981 qpair failed and we were unable to recover it. 00:30:37.981 [2024-11-20 15:40:26.797971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.981 [2024-11-20 15:40:26.797986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.981 qpair failed and we were unable to recover it. 00:30:37.981 [2024-11-20 15:40:26.798323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.981 [2024-11-20 15:40:26.798339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.981 qpair failed and we were unable to recover it. 00:30:37.981 [2024-11-20 15:40:26.798729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.981 [2024-11-20 15:40:26.798744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.981 qpair failed and we were unable to recover it. 00:30:37.981 [2024-11-20 15:40:26.799093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.981 [2024-11-20 15:40:26.799108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.981 qpair failed and we were unable to recover it. 00:30:37.981 [2024-11-20 15:40:26.799454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.981 [2024-11-20 15:40:26.799470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.981 qpair failed and we were unable to recover it. 00:30:37.981 [2024-11-20 15:40:26.799766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.981 [2024-11-20 15:40:26.799782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.981 qpair failed and we were unable to recover it. 00:30:37.981 [2024-11-20 15:40:26.800128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.981 [2024-11-20 15:40:26.800142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.981 qpair failed and we were unable to recover it. 00:30:37.981 [2024-11-20 15:40:26.800487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.981 [2024-11-20 15:40:26.800505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.981 qpair failed and we were unable to recover it. 
00:30:37.981 [2024-11-20 15:40:26.800831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.981 [2024-11-20 15:40:26.800845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.981 qpair failed and we were unable to recover it. 00:30:37.981 [2024-11-20 15:40:26.801191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.981 [2024-11-20 15:40:26.801208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.981 qpair failed and we were unable to recover it. 00:30:37.981 [2024-11-20 15:40:26.801523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.981 [2024-11-20 15:40:26.801541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.981 qpair failed and we were unable to recover it. 00:30:37.981 [2024-11-20 15:40:26.801878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.981 [2024-11-20 15:40:26.801896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.981 qpair failed and we were unable to recover it. 00:30:37.981 [2024-11-20 15:40:26.802246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.981 [2024-11-20 15:40:26.802263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.981 qpair failed and we were unable to recover it. 00:30:37.981 [2024-11-20 15:40:26.802643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.981 [2024-11-20 15:40:26.802659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.981 qpair failed and we were unable to recover it. 00:30:37.981 [2024-11-20 15:40:26.802988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.981 [2024-11-20 15:40:26.803004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.981 qpair failed and we were unable to recover it. 00:30:37.981 [2024-11-20 15:40:26.803307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.981 [2024-11-20 15:40:26.803325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.981 qpair failed and we were unable to recover it. 00:30:37.981 [2024-11-20 15:40:26.803649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.981 [2024-11-20 15:40:26.803664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.981 qpair failed and we were unable to recover it. 00:30:37.981 [2024-11-20 15:40:26.804054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.981 [2024-11-20 15:40:26.804075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.981 qpair failed and we were unable to recover it. 
00:30:37.981 [2024-11-20 15:40:26.804453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.981 [2024-11-20 15:40:26.804470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.981 qpair failed and we were unable to recover it. 00:30:37.981 [2024-11-20 15:40:26.804804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.981 [2024-11-20 15:40:26.804821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.981 qpair failed and we were unable to recover it. 00:30:37.981 [2024-11-20 15:40:26.805155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.981 [2024-11-20 15:40:26.805182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.981 qpair failed and we were unable to recover it. 00:30:37.981 [2024-11-20 15:40:26.805528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.981 [2024-11-20 15:40:26.805545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.981 qpair failed and we were unable to recover it. 00:30:37.981 [2024-11-20 15:40:26.805875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.981 [2024-11-20 15:40:26.805891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.981 qpair failed and we were unable to recover it. 00:30:37.981 [2024-11-20 15:40:26.806232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.981 [2024-11-20 15:40:26.806249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.981 qpair failed and we were unable to recover it. 00:30:37.981 [2024-11-20 15:40:26.806586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.981 [2024-11-20 15:40:26.806604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.981 qpair failed and we were unable to recover it. 00:30:37.981 [2024-11-20 15:40:26.806940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.981 [2024-11-20 15:40:26.806955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.981 qpair failed and we were unable to recover it. 00:30:37.981 [2024-11-20 15:40:26.807261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.981 [2024-11-20 15:40:26.807279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.982 qpair failed and we were unable to recover it. 00:30:37.982 [2024-11-20 15:40:26.807627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.982 [2024-11-20 15:40:26.807643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.982 qpair failed and we were unable to recover it. 
00:30:37.982 [2024-11-20 15:40:26.807985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.982 [2024-11-20 15:40:26.808002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.982 qpair failed and we were unable to recover it. 00:30:37.982 [2024-11-20 15:40:26.808331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.982 [2024-11-20 15:40:26.808347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.982 qpair failed and we were unable to recover it. 00:30:37.982 [2024-11-20 15:40:26.808577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.982 [2024-11-20 15:40:26.808594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.982 qpair failed and we were unable to recover it. 00:30:37.982 [2024-11-20 15:40:26.808920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.982 [2024-11-20 15:40:26.808935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.982 qpair failed and we were unable to recover it. 00:30:37.982 [2024-11-20 15:40:26.809236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.982 [2024-11-20 15:40:26.809252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.982 qpair failed and we were unable to recover it. 00:30:37.982 [2024-11-20 15:40:26.809605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.982 [2024-11-20 15:40:26.809620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.982 qpair failed and we were unable to recover it. 00:30:37.982 [2024-11-20 15:40:26.809945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.982 [2024-11-20 15:40:26.809962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.982 qpair failed and we were unable to recover it. 00:30:37.982 [2024-11-20 15:40:26.810314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.982 [2024-11-20 15:40:26.810331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.982 qpair failed and we were unable to recover it. 00:30:37.982 [2024-11-20 15:40:26.810913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.982 [2024-11-20 15:40:26.810928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.982 qpair failed and we were unable to recover it. 00:30:37.982 [2024-11-20 15:40:26.811341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.982 [2024-11-20 15:40:26.811356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.982 qpair failed and we were unable to recover it. 
00:30:37.982 [2024-11-20 15:40:26.811706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.982 [2024-11-20 15:40:26.811721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.982 qpair failed and we were unable to recover it. 00:30:37.982 [2024-11-20 15:40:26.812045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.982 [2024-11-20 15:40:26.812061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.982 qpair failed and we were unable to recover it. 00:30:37.982 [2024-11-20 15:40:26.812387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.982 [2024-11-20 15:40:26.812403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.982 qpair failed and we were unable to recover it. 00:30:37.982 [2024-11-20 15:40:26.812742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.982 [2024-11-20 15:40:26.812757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.982 qpair failed and we were unable to recover it. 00:30:37.982 [2024-11-20 15:40:26.813078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.982 [2024-11-20 15:40:26.813094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.982 qpair failed and we were unable to recover it. 00:30:37.982 [2024-11-20 15:40:26.813333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.982 [2024-11-20 15:40:26.813348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.982 qpair failed and we were unable to recover it. 00:30:37.982 [2024-11-20 15:40:26.813663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.982 [2024-11-20 15:40:26.813679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.982 qpair failed and we were unable to recover it. 00:30:37.982 [2024-11-20 15:40:26.814029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.982 [2024-11-20 15:40:26.814046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.982 qpair failed and we were unable to recover it. 00:30:37.982 [2024-11-20 15:40:26.814457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.982 [2024-11-20 15:40:26.814474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.982 qpair failed and we were unable to recover it. 00:30:37.982 [2024-11-20 15:40:26.814847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.982 [2024-11-20 15:40:26.814864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.982 qpair failed and we were unable to recover it. 
00:30:37.982 [2024-11-20 15:40:26.815199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.982 [2024-11-20 15:40:26.815216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.982 qpair failed and we were unable to recover it. 00:30:37.982 [2024-11-20 15:40:26.815888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.982 [2024-11-20 15:40:26.815914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.982 qpair failed and we were unable to recover it. 00:30:37.982 [2024-11-20 15:40:26.816255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.982 [2024-11-20 15:40:26.816275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.982 qpair failed and we were unable to recover it. 00:30:37.982 [2024-11-20 15:40:26.816582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.982 [2024-11-20 15:40:26.816598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.982 qpair failed and we were unable to recover it. 00:30:37.982 [2024-11-20 15:40:26.816933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.982 [2024-11-20 15:40:26.816958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.982 qpair failed and we were unable to recover it. 00:30:37.982 [2024-11-20 15:40:26.817280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.982 [2024-11-20 15:40:26.817296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.982 qpair failed and we were unable to recover it. 00:30:37.982 [2024-11-20 15:40:26.817629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.982 [2024-11-20 15:40:26.817645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.982 qpair failed and we were unable to recover it. 00:30:37.982 [2024-11-20 15:40:26.817966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.982 [2024-11-20 15:40:26.817981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.982 qpair failed and we were unable to recover it. 00:30:37.982 [2024-11-20 15:40:26.818337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.982 [2024-11-20 15:40:26.818354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.982 qpair failed and we were unable to recover it. 00:30:37.982 [2024-11-20 15:40:26.818679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.982 [2024-11-20 15:40:26.818693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.982 qpair failed and we were unable to recover it. 
00:30:37.982 [2024-11-20 15:40:26.819025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.982 [2024-11-20 15:40:26.819046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.982 qpair failed and we were unable to recover it. 00:30:37.982 [2024-11-20 15:40:26.819278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.982 [2024-11-20 15:40:26.819294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.982 qpair failed and we were unable to recover it. 00:30:37.982 [2024-11-20 15:40:26.819640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.982 [2024-11-20 15:40:26.819655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.982 qpair failed and we were unable to recover it. 00:30:37.982 [2024-11-20 15:40:26.819946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.982 [2024-11-20 15:40:26.819961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.982 qpair failed and we were unable to recover it. 00:30:37.982 [2024-11-20 15:40:26.820175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.982 [2024-11-20 15:40:26.820200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.982 qpair failed and we were unable to recover it. 00:30:37.982 [2024-11-20 15:40:26.820539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.982 [2024-11-20 15:40:26.820553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.982 qpair failed and we were unable to recover it. 00:30:37.982 [2024-11-20 15:40:26.820842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.982 [2024-11-20 15:40:26.820866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.982 qpair failed and we were unable to recover it. 00:30:37.983 [2024-11-20 15:40:26.821183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.983 [2024-11-20 15:40:26.821200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.983 qpair failed and we were unable to recover it. 00:30:37.983 [2024-11-20 15:40:26.821516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.983 [2024-11-20 15:40:26.821532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.983 qpair failed and we were unable to recover it. 00:30:37.983 [2024-11-20 15:40:26.821862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.983 [2024-11-20 15:40:26.821877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.983 qpair failed and we were unable to recover it. 
00:30:37.983 [2024-11-20 15:40:26.822180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.983 [2024-11-20 15:40:26.822195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.983 qpair failed and we were unable to recover it. 00:30:37.983 [2024-11-20 15:40:26.822422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.983 [2024-11-20 15:40:26.822437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.983 qpair failed and we were unable to recover it. 00:30:37.983 [2024-11-20 15:40:26.822778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.983 [2024-11-20 15:40:26.822793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.983 qpair failed and we were unable to recover it. 00:30:37.983 [2024-11-20 15:40:26.823032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.983 [2024-11-20 15:40:26.823048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.983 qpair failed and we were unable to recover it. 00:30:37.983 [2024-11-20 15:40:26.823388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.983 [2024-11-20 15:40:26.823404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.983 qpair failed and we were unable to recover it. 00:30:37.983 [2024-11-20 15:40:26.823745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.983 [2024-11-20 15:40:26.823761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.983 qpair failed and we were unable to recover it. 00:30:37.983 [2024-11-20 15:40:26.824114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.983 [2024-11-20 15:40:26.824129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.983 qpair failed and we were unable to recover it. 00:30:37.983 [2024-11-20 15:40:26.824511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.983 [2024-11-20 15:40:26.824527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.983 qpair failed and we were unable to recover it. 00:30:37.983 [2024-11-20 15:40:26.824869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.983 [2024-11-20 15:40:26.824885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.983 qpair failed and we were unable to recover it. 00:30:37.983 [2024-11-20 15:40:26.825186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.983 [2024-11-20 15:40:26.825204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.983 qpair failed and we were unable to recover it. 
00:30:37.983 [2024-11-20 15:40:26.825522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.983 [2024-11-20 15:40:26.825537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.983 qpair failed and we were unable to recover it. 00:30:37.983 [2024-11-20 15:40:26.825844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.983 [2024-11-20 15:40:26.825859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.983 qpair failed and we were unable to recover it. 00:30:37.983 [2024-11-20 15:40:26.826214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.983 [2024-11-20 15:40:26.826229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.983 qpair failed and we were unable to recover it. 00:30:37.983 [2024-11-20 15:40:26.826563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.983 [2024-11-20 15:40:26.826577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.983 qpair failed and we were unable to recover it. 00:30:37.983 [2024-11-20 15:40:26.826908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.983 [2024-11-20 15:40:26.826923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.983 qpair failed and we were unable to recover it. 00:30:37.983 [2024-11-20 15:40:26.827259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.983 [2024-11-20 15:40:26.827274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.983 qpair failed and we were unable to recover it. 00:30:37.983 [2024-11-20 15:40:26.827492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.983 [2024-11-20 15:40:26.827507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.983 qpair failed and we were unable to recover it. 00:30:37.983 [2024-11-20 15:40:26.827856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.983 [2024-11-20 15:40:26.827872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.983 qpair failed and we were unable to recover it. 00:30:37.983 [2024-11-20 15:40:26.828224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.983 [2024-11-20 15:40:26.828242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.983 qpair failed and we were unable to recover it. 00:30:37.983 [2024-11-20 15:40:26.828581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.983 [2024-11-20 15:40:26.828596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.983 qpair failed and we were unable to recover it. 
00:30:37.983 [2024-11-20 15:40:26.828913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.983 [2024-11-20 15:40:26.828928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.983 qpair failed and we were unable to recover it. 00:30:37.983 [2024-11-20 15:40:26.829241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.983 [2024-11-20 15:40:26.829257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.983 qpair failed and we were unable to recover it. 00:30:37.983 [2024-11-20 15:40:26.829590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.983 [2024-11-20 15:40:26.829605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.983 qpair failed and we were unable to recover it. 00:30:37.983 [2024-11-20 15:40:26.829922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.983 [2024-11-20 15:40:26.829938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.983 qpair failed and we were unable to recover it. 00:30:37.983 [2024-11-20 15:40:26.830237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.983 [2024-11-20 15:40:26.830253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.983 qpair failed and we were unable to recover it. 00:30:37.983 [2024-11-20 15:40:26.830644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.983 [2024-11-20 15:40:26.830659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.983 qpair failed and we were unable to recover it. 00:30:37.983 [2024-11-20 15:40:26.830973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.983 [2024-11-20 15:40:26.830988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.983 qpair failed and we were unable to recover it. 00:30:37.983 [2024-11-20 15:40:26.831319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.983 [2024-11-20 15:40:26.831334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.983 qpair failed and we were unable to recover it. 00:30:37.983 [2024-11-20 15:40:26.831552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.983 [2024-11-20 15:40:26.831567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.983 qpair failed and we were unable to recover it. 00:30:37.983 [2024-11-20 15:40:26.831903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.983 [2024-11-20 15:40:26.831917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.983 qpair failed and we were unable to recover it. 
00:30:37.983 [2024-11-20 15:40:26.832242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.983 [2024-11-20 15:40:26.832259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.983 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111 -> nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it) repeats continuously from 15:40:26.832 through 15:40:26.903, elapsed 00:30:37.983-00:30:37.989 ...]
00:30:37.989 [2024-11-20 15:40:26.903097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.989 [2024-11-20 15:40:26.903120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.989 qpair failed and we were unable to recover it.
00:30:37.989 [2024-11-20 15:40:26.903457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.989 [2024-11-20 15:40:26.903473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.989 qpair failed and we were unable to recover it. 00:30:37.989 [2024-11-20 15:40:26.903833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.989 [2024-11-20 15:40:26.903850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.989 qpair failed and we were unable to recover it. 00:30:37.989 [2024-11-20 15:40:26.904197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.989 [2024-11-20 15:40:26.904215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.989 qpair failed and we were unable to recover it. 00:30:37.989 [2024-11-20 15:40:26.904573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.989 [2024-11-20 15:40:26.904588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.989 qpair failed and we were unable to recover it. 00:30:37.989 [2024-11-20 15:40:26.904798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.989 [2024-11-20 15:40:26.904812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.989 qpair failed and we were unable to recover it. 00:30:37.989 [2024-11-20 15:40:26.905077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.989 [2024-11-20 15:40:26.905093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.989 qpair failed and we were unable to recover it. 00:30:37.989 [2024-11-20 15:40:26.905395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.989 [2024-11-20 15:40:26.905411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.989 qpair failed and we were unable to recover it. 00:30:37.989 [2024-11-20 15:40:26.905756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.989 [2024-11-20 15:40:26.905771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.989 qpair failed and we were unable to recover it. 00:30:37.989 [2024-11-20 15:40:26.906092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.989 [2024-11-20 15:40:26.906108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.989 qpair failed and we were unable to recover it. 00:30:37.989 [2024-11-20 15:40:26.906520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.989 [2024-11-20 15:40:26.906537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.989 qpair failed and we were unable to recover it. 
00:30:37.989 [2024-11-20 15:40:26.906867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.989 [2024-11-20 15:40:26.906882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.989 qpair failed and we were unable to recover it. 00:30:37.989 [2024-11-20 15:40:26.907247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.989 [2024-11-20 15:40:26.907263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.989 qpair failed and we were unable to recover it. 00:30:37.989 [2024-11-20 15:40:26.907501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.989 [2024-11-20 15:40:26.907516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.989 qpair failed and we were unable to recover it. 00:30:37.989 [2024-11-20 15:40:26.907859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.989 [2024-11-20 15:40:26.907873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.989 qpair failed and we were unable to recover it. 00:30:37.989 [2024-11-20 15:40:26.908217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.989 [2024-11-20 15:40:26.908234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.989 qpair failed and we were unable to recover it. 00:30:37.989 [2024-11-20 15:40:26.908550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.989 [2024-11-20 15:40:26.908564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.990 qpair failed and we were unable to recover it. 00:30:37.990 [2024-11-20 15:40:26.908807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.990 [2024-11-20 15:40:26.908823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.990 qpair failed and we were unable to recover it. 00:30:37.990 [2024-11-20 15:40:26.909216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.990 [2024-11-20 15:40:26.909232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.990 qpair failed and we were unable to recover it. 00:30:37.990 [2024-11-20 15:40:26.909575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.990 [2024-11-20 15:40:26.909591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.990 qpair failed and we were unable to recover it. 00:30:37.990 [2024-11-20 15:40:26.909832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.990 [2024-11-20 15:40:26.909847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.990 qpair failed and we were unable to recover it. 
00:30:37.990 [2024-11-20 15:40:26.910062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.990 [2024-11-20 15:40:26.910077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.990 qpair failed and we were unable to recover it. 00:30:37.990 [2024-11-20 15:40:26.910396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.990 [2024-11-20 15:40:26.910413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.990 qpair failed and we were unable to recover it. 00:30:37.990 [2024-11-20 15:40:26.910617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.990 [2024-11-20 15:40:26.910637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.990 qpair failed and we were unable to recover it. 00:30:37.990 [2024-11-20 15:40:26.910996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.990 [2024-11-20 15:40:26.911013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.990 qpair failed and we were unable to recover it. 00:30:37.990 [2024-11-20 15:40:26.911368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.990 [2024-11-20 15:40:26.911384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.990 qpair failed and we were unable to recover it. 00:30:37.990 [2024-11-20 15:40:26.911771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.990 [2024-11-20 15:40:26.911786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.990 qpair failed and we were unable to recover it. 00:30:37.990 [2024-11-20 15:40:26.912011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.990 [2024-11-20 15:40:26.912026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.990 qpair failed and we were unable to recover it. 00:30:37.990 [2024-11-20 15:40:26.912376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.990 [2024-11-20 15:40:26.912392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.990 qpair failed and we were unable to recover it. 00:30:37.990 [2024-11-20 15:40:26.912731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.990 [2024-11-20 15:40:26.912746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.990 qpair failed and we were unable to recover it. 00:30:37.990 [2024-11-20 15:40:26.913067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.990 [2024-11-20 15:40:26.913084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.990 qpair failed and we were unable to recover it. 
00:30:37.990 [2024-11-20 15:40:26.913402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.990 [2024-11-20 15:40:26.913418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.990 qpair failed and we were unable to recover it. 00:30:37.990 [2024-11-20 15:40:26.913740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.990 [2024-11-20 15:40:26.913755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.990 qpair failed and we were unable to recover it. 00:30:37.990 [2024-11-20 15:40:26.914051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.990 [2024-11-20 15:40:26.914068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.990 qpair failed and we were unable to recover it. 00:30:37.990 [2024-11-20 15:40:26.914454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.990 [2024-11-20 15:40:26.914470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.990 qpair failed and we were unable to recover it. 00:30:37.990 [2024-11-20 15:40:26.914717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.990 [2024-11-20 15:40:26.914732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.990 qpair failed and we were unable to recover it. 00:30:37.990 [2024-11-20 15:40:26.914960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.990 [2024-11-20 15:40:26.914975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.990 qpair failed and we were unable to recover it. 00:30:37.990 [2024-11-20 15:40:26.915270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.990 [2024-11-20 15:40:26.915288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.990 qpair failed and we were unable to recover it. 00:30:37.990 [2024-11-20 15:40:26.915525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.990 [2024-11-20 15:40:26.915542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.990 qpair failed and we were unable to recover it. 00:30:37.990 [2024-11-20 15:40:26.915767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.990 [2024-11-20 15:40:26.915782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.990 qpair failed and we were unable to recover it. 00:30:37.990 [2024-11-20 15:40:26.916115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.990 [2024-11-20 15:40:26.916130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:37.990 qpair failed and we were unable to recover it. 
00:30:37.990 [2024-11-20 15:40:26.916385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.266 [2024-11-20 15:40:26.916403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.266 qpair failed and we were unable to recover it. 00:30:38.266 [2024-11-20 15:40:26.916751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.266 [2024-11-20 15:40:26.916769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.266 qpair failed and we were unable to recover it. 00:30:38.266 [2024-11-20 15:40:26.917005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.266 [2024-11-20 15:40:26.917020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.266 qpair failed and we were unable to recover it. 00:30:38.266 [2024-11-20 15:40:26.917258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.266 [2024-11-20 15:40:26.917277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.266 qpair failed and we were unable to recover it. 00:30:38.266 [2024-11-20 15:40:26.917612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.266 [2024-11-20 15:40:26.917627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.266 qpair failed and we were unable to recover it. 00:30:38.266 [2024-11-20 15:40:26.917894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.266 [2024-11-20 15:40:26.917911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.266 qpair failed and we were unable to recover it. 00:30:38.266 [2024-11-20 15:40:26.918255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.266 [2024-11-20 15:40:26.918272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.266 qpair failed and we were unable to recover it. 00:30:38.266 [2024-11-20 15:40:26.918563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.266 [2024-11-20 15:40:26.918579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.266 qpair failed and we were unable to recover it. 00:30:38.266 [2024-11-20 15:40:26.918925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.266 [2024-11-20 15:40:26.918942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.266 qpair failed and we were unable to recover it. 00:30:38.266 [2024-11-20 15:40:26.919201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.266 [2024-11-20 15:40:26.919216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.266 qpair failed and we were unable to recover it. 
00:30:38.266 [2024-11-20 15:40:26.919473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.266 [2024-11-20 15:40:26.919488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.266 qpair failed and we were unable to recover it. 00:30:38.266 [2024-11-20 15:40:26.919838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.266 [2024-11-20 15:40:26.919853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.266 qpair failed and we were unable to recover it. 00:30:38.266 [2024-11-20 15:40:26.920189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.266 [2024-11-20 15:40:26.920207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.266 qpair failed and we were unable to recover it. 00:30:38.266 [2024-11-20 15:40:26.920538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.266 [2024-11-20 15:40:26.920554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.266 qpair failed and we were unable to recover it. 00:30:38.266 [2024-11-20 15:40:26.920798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.266 [2024-11-20 15:40:26.920813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.266 qpair failed and we were unable to recover it. 00:30:38.266 [2024-11-20 15:40:26.921039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.266 [2024-11-20 15:40:26.921053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.266 qpair failed and we were unable to recover it. 00:30:38.266 [2024-11-20 15:40:26.921357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.266 [2024-11-20 15:40:26.921373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.266 qpair failed and we were unable to recover it. 00:30:38.266 [2024-11-20 15:40:26.921695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.266 [2024-11-20 15:40:26.921712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.266 qpair failed and we were unable to recover it. 00:30:38.266 [2024-11-20 15:40:26.922083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.266 [2024-11-20 15:40:26.922099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.266 qpair failed and we were unable to recover it. 00:30:38.266 [2024-11-20 15:40:26.922353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.266 [2024-11-20 15:40:26.922372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.266 qpair failed and we were unable to recover it. 
00:30:38.266 [2024-11-20 15:40:26.922713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.266 [2024-11-20 15:40:26.922730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.266 qpair failed and we were unable to recover it. 00:30:38.266 [2024-11-20 15:40:26.923083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.266 [2024-11-20 15:40:26.923099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.266 qpair failed and we were unable to recover it. 00:30:38.266 [2024-11-20 15:40:26.923438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.266 [2024-11-20 15:40:26.923453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.266 qpair failed and we were unable to recover it. 00:30:38.266 [2024-11-20 15:40:26.923805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.266 [2024-11-20 15:40:26.923825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.266 qpair failed and we were unable to recover it. 00:30:38.267 [2024-11-20 15:40:26.924135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.267 [2024-11-20 15:40:26.924154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.267 qpair failed and we were unable to recover it. 00:30:38.267 [2024-11-20 15:40:26.924501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.267 [2024-11-20 15:40:26.924517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.267 qpair failed and we were unable to recover it. 00:30:38.267 [2024-11-20 15:40:26.924858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.267 [2024-11-20 15:40:26.924873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.267 qpair failed and we were unable to recover it. 00:30:38.267 [2024-11-20 15:40:26.925226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.267 [2024-11-20 15:40:26.925244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.267 qpair failed and we were unable to recover it. 00:30:38.267 [2024-11-20 15:40:26.925552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.267 [2024-11-20 15:40:26.925566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.267 qpair failed and we were unable to recover it. 00:30:38.267 [2024-11-20 15:40:26.925765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.267 [2024-11-20 15:40:26.925781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.267 qpair failed and we were unable to recover it. 
00:30:38.267 [2024-11-20 15:40:26.926012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.267 [2024-11-20 15:40:26.926026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.267 qpair failed and we were unable to recover it. 00:30:38.267 [2024-11-20 15:40:26.926335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.267 [2024-11-20 15:40:26.926352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.267 qpair failed and we were unable to recover it. 00:30:38.267 [2024-11-20 15:40:26.926703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.267 [2024-11-20 15:40:26.926718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.267 qpair failed and we were unable to recover it. 00:30:38.267 [2024-11-20 15:40:26.927056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.267 [2024-11-20 15:40:26.927072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.267 qpair failed and we were unable to recover it. 00:30:38.267 [2024-11-20 15:40:26.927393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.267 [2024-11-20 15:40:26.927409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.267 qpair failed and we were unable to recover it. 00:30:38.267 [2024-11-20 15:40:26.927737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.267 [2024-11-20 15:40:26.927752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.267 qpair failed and we were unable to recover it. 00:30:38.267 [2024-11-20 15:40:26.928116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.267 [2024-11-20 15:40:26.928131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.267 qpair failed and we were unable to recover it. 00:30:38.267 [2024-11-20 15:40:26.928500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.267 [2024-11-20 15:40:26.928516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.267 qpair failed and we were unable to recover it. 00:30:38.267 [2024-11-20 15:40:26.928723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.267 [2024-11-20 15:40:26.928737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.267 qpair failed and we were unable to recover it. 00:30:38.267 [2024-11-20 15:40:26.928944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.267 [2024-11-20 15:40:26.928958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.267 qpair failed and we were unable to recover it. 
00:30:38.267 [2024-11-20 15:40:26.929311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.267 [2024-11-20 15:40:26.929328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.267 qpair failed and we were unable to recover it. 00:30:38.267 [2024-11-20 15:40:26.929694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.267 [2024-11-20 15:40:26.929710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.267 qpair failed and we were unable to recover it. 00:30:38.267 [2024-11-20 15:40:26.930063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.267 [2024-11-20 15:40:26.930080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.267 qpair failed and we were unable to recover it. 00:30:38.267 [2024-11-20 15:40:26.930422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.267 [2024-11-20 15:40:26.930439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.267 qpair failed and we were unable to recover it. 00:30:38.267 [2024-11-20 15:40:26.930771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.267 [2024-11-20 15:40:26.930786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.267 qpair failed and we were unable to recover it. 00:30:38.267 [2024-11-20 15:40:26.931139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.267 [2024-11-20 15:40:26.931156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.267 qpair failed and we were unable to recover it. 00:30:38.267 [2024-11-20 15:40:26.931465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.267 [2024-11-20 15:40:26.931481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.267 qpair failed and we were unable to recover it. 00:30:38.267 [2024-11-20 15:40:26.931666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.267 [2024-11-20 15:40:26.931683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.267 qpair failed and we were unable to recover it. 00:30:38.267 [2024-11-20 15:40:26.931896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.267 [2024-11-20 15:40:26.931912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.267 qpair failed and we were unable to recover it. 00:30:38.267 [2024-11-20 15:40:26.932253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.267 [2024-11-20 15:40:26.932271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.267 qpair failed and we were unable to recover it. 
00:30:38.267 [2024-11-20 15:40:26.932595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.267 [2024-11-20 15:40:26.932611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.267 qpair failed and we were unable to recover it. 00:30:38.267 [2024-11-20 15:40:26.932951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.267 [2024-11-20 15:40:26.932966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.267 qpair failed and we were unable to recover it. 00:30:38.267 [2024-11-20 15:40:26.933310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.267 [2024-11-20 15:40:26.933327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.267 qpair failed and we were unable to recover it. 00:30:38.267 [2024-11-20 15:40:26.933675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.267 [2024-11-20 15:40:26.933691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.267 qpair failed and we were unable to recover it. 00:30:38.267 [2024-11-20 15:40:26.934046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.267 [2024-11-20 15:40:26.934061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.267 qpair failed and we were unable to recover it. 00:30:38.267 [2024-11-20 15:40:26.934413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.267 [2024-11-20 15:40:26.934430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.267 qpair failed and we were unable to recover it. 00:30:38.267 [2024-11-20 15:40:26.934723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.267 [2024-11-20 15:40:26.934740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.267 qpair failed and we were unable to recover it. 00:30:38.267 [2024-11-20 15:40:26.934948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.267 [2024-11-20 15:40:26.934964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.267 qpair failed and we were unable to recover it. 00:30:38.267 [2024-11-20 15:40:26.935311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.267 [2024-11-20 15:40:26.935327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.267 qpair failed and we were unable to recover it. 00:30:38.267 [2024-11-20 15:40:26.935677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.267 [2024-11-20 15:40:26.935693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.267 qpair failed and we were unable to recover it. 
00:30:38.267 [2024-11-20 15:40:26.936029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.267 [2024-11-20 15:40:26.936045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.267 qpair failed and we were unable to recover it. 00:30:38.267 [2024-11-20 15:40:26.936397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.267 [2024-11-20 15:40:26.936413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.267 qpair failed and we were unable to recover it. 00:30:38.268 [2024-11-20 15:40:26.936671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.268 [2024-11-20 15:40:26.936686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.268 qpair failed and we were unable to recover it. 00:30:38.268 [2024-11-20 15:40:26.937050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.268 [2024-11-20 15:40:26.937065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.268 qpair failed and we were unable to recover it. 00:30:38.268 [2024-11-20 15:40:26.937330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.268 [2024-11-20 15:40:26.937347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.268 qpair failed and we were unable to recover it. 00:30:38.268 [2024-11-20 15:40:26.937753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.268 [2024-11-20 15:40:26.937768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.268 qpair failed and we were unable to recover it. 00:30:38.268 [2024-11-20 15:40:26.938109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.268 [2024-11-20 15:40:26.938126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.268 qpair failed and we were unable to recover it. 00:30:38.268 [2024-11-20 15:40:26.938463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.268 [2024-11-20 15:40:26.938482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.268 qpair failed and we were unable to recover it. 00:30:38.268 [2024-11-20 15:40:26.938818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.268 [2024-11-20 15:40:26.938834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.268 qpair failed and we were unable to recover it. 00:30:38.268 [2024-11-20 15:40:26.939156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.268 [2024-11-20 15:40:26.939182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.268 qpair failed and we were unable to recover it. 
00:30:38.268 [2024-11-20 15:40:26.939509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.268 [2024-11-20 15:40:26.939525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.268 qpair failed and we were unable to recover it. 00:30:38.268 [2024-11-20 15:40:26.939864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.268 [2024-11-20 15:40:26.939880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.268 qpair failed and we were unable to recover it. 00:30:38.268 [2024-11-20 15:40:26.940209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.268 [2024-11-20 15:40:26.940235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.268 qpair failed and we were unable to recover it. 00:30:38.268 [2024-11-20 15:40:26.940634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.268 [2024-11-20 15:40:26.940649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.268 qpair failed and we were unable to recover it. 00:30:38.268 [2024-11-20 15:40:26.941002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.268 [2024-11-20 15:40:26.941016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.268 qpair failed and we were unable to recover it. 00:30:38.268 [2024-11-20 15:40:26.941383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.268 [2024-11-20 15:40:26.941400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.268 qpair failed and we were unable to recover it. 00:30:38.268 [2024-11-20 15:40:26.941758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.268 [2024-11-20 15:40:26.941774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.268 qpair failed and we were unable to recover it. 00:30:38.268 [2024-11-20 15:40:26.942099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.268 [2024-11-20 15:40:26.942115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.268 qpair failed and we were unable to recover it. 00:30:38.268 [2024-11-20 15:40:26.942475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.268 [2024-11-20 15:40:26.942493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.268 qpair failed and we were unable to recover it. 00:30:38.268 [2024-11-20 15:40:26.942827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.268 [2024-11-20 15:40:26.942843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.268 qpair failed and we were unable to recover it. 
00:30:38.268 [2024-11-20 15:40:26.943178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.268 [2024-11-20 15:40:26.943194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.268 qpair failed and we were unable to recover it. 00:30:38.268 [2024-11-20 15:40:26.943585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.268 [2024-11-20 15:40:26.943602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.268 qpair failed and we were unable to recover it. 00:30:38.268 [2024-11-20 15:40:26.943932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.268 [2024-11-20 15:40:26.943948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.268 qpair failed and we were unable to recover it. 00:30:38.268 [2024-11-20 15:40:26.944173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.268 [2024-11-20 15:40:26.944191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.268 qpair failed and we were unable to recover it. 00:30:38.268 [2024-11-20 15:40:26.944403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.268 [2024-11-20 15:40:26.944420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.268 qpair failed and we were unable to recover it. 00:30:38.268 [2024-11-20 15:40:26.944740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.268 [2024-11-20 15:40:26.944755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.268 qpair failed and we were unable to recover it. 00:30:38.268 [2024-11-20 15:40:26.945101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.268 [2024-11-20 15:40:26.945117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.268 qpair failed and we were unable to recover it. 00:30:38.268 [2024-11-20 15:40:26.945468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.268 [2024-11-20 15:40:26.945486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.268 qpair failed and we were unable to recover it. 00:30:38.268 [2024-11-20 15:40:26.945712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.268 [2024-11-20 15:40:26.945727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.268 qpair failed and we were unable to recover it. 00:30:38.268 [2024-11-20 15:40:26.946062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.268 [2024-11-20 15:40:26.946076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.268 qpair failed and we were unable to recover it. 
00:30:38.268 [2024-11-20 15:40:26.946411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:38.268 [2024-11-20 15:40:26.946428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420
00:30:38.268 qpair failed and we were unable to recover it.
[... ~200 near-identical retry attempts elided: the same posix.c:1054 connect() failure (errno = 111) and nvme_tcp.c:2288 sock connection error for tqpair=0xc650c0 against addr=10.0.0.2, port=4420 repeat from 15:40:26.946 through 15:40:27.017, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:30:38.274 [2024-11-20 15:40:27.017377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.274 [2024-11-20 15:40:27.017393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.274 qpair failed and we were unable to recover it. 00:30:38.274 [2024-11-20 15:40:27.017743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.274 [2024-11-20 15:40:27.017758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.274 qpair failed and we were unable to recover it. 00:30:38.274 [2024-11-20 15:40:27.018071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.274 [2024-11-20 15:40:27.018086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.274 qpair failed and we were unable to recover it. 00:30:38.274 [2024-11-20 15:40:27.018436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.274 [2024-11-20 15:40:27.018452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.274 qpair failed and we were unable to recover it. 00:30:38.274 [2024-11-20 15:40:27.018657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.274 [2024-11-20 15:40:27.018673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.274 qpair failed and we were unable to recover it. 00:30:38.274 [2024-11-20 15:40:27.019015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.274 [2024-11-20 15:40:27.019032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.274 qpair failed and we were unable to recover it. 00:30:38.274 [2024-11-20 15:40:27.019358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.274 [2024-11-20 15:40:27.019374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.274 qpair failed and we were unable to recover it. 00:30:38.274 [2024-11-20 15:40:27.019733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.274 [2024-11-20 15:40:27.019749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.274 qpair failed and we were unable to recover it. 00:30:38.274 [2024-11-20 15:40:27.020133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.274 [2024-11-20 15:40:27.020149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.274 qpair failed and we were unable to recover it. 00:30:38.274 [2024-11-20 15:40:27.020516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.274 [2024-11-20 15:40:27.020533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.274 qpair failed and we were unable to recover it. 
00:30:38.274 [2024-11-20 15:40:27.020746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.274 [2024-11-20 15:40:27.020762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.274 qpair failed and we were unable to recover it. 00:30:38.274 [2024-11-20 15:40:27.021138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.274 [2024-11-20 15:40:27.021155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.274 qpair failed and we were unable to recover it. 00:30:38.274 [2024-11-20 15:40:27.021529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.274 [2024-11-20 15:40:27.021547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.274 qpair failed and we were unable to recover it. 00:30:38.274 [2024-11-20 15:40:27.021873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.274 [2024-11-20 15:40:27.021890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.274 qpair failed and we were unable to recover it. 00:30:38.274 [2024-11-20 15:40:27.022222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.274 [2024-11-20 15:40:27.022238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.274 qpair failed and we were unable to recover it. 00:30:38.274 [2024-11-20 15:40:27.022574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.274 [2024-11-20 15:40:27.022589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.274 qpair failed and we were unable to recover it. 00:30:38.274 [2024-11-20 15:40:27.022950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.274 [2024-11-20 15:40:27.022965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.274 qpair failed and we were unable to recover it. 00:30:38.274 [2024-11-20 15:40:27.023260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.274 [2024-11-20 15:40:27.023277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.274 qpair failed and we were unable to recover it. 00:30:38.274 [2024-11-20 15:40:27.023629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.274 [2024-11-20 15:40:27.023644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.274 qpair failed and we were unable to recover it. 00:30:38.274 [2024-11-20 15:40:27.023981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.274 [2024-11-20 15:40:27.023996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.274 qpair failed and we were unable to recover it. 
00:30:38.274 [2024-11-20 15:40:27.024227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.274 [2024-11-20 15:40:27.024242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.274 qpair failed and we were unable to recover it. 00:30:38.274 [2024-11-20 15:40:27.024589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.274 [2024-11-20 15:40:27.024603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.274 qpair failed and we were unable to recover it. 00:30:38.274 [2024-11-20 15:40:27.024950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.274 [2024-11-20 15:40:27.024965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.274 qpair failed and we were unable to recover it. 00:30:38.274 [2024-11-20 15:40:27.025275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.275 [2024-11-20 15:40:27.025291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.275 qpair failed and we were unable to recover it. 00:30:38.275 [2024-11-20 15:40:27.025647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.275 [2024-11-20 15:40:27.025662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.275 qpair failed and we were unable to recover it. 00:30:38.275 [2024-11-20 15:40:27.025887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.275 [2024-11-20 15:40:27.025901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.275 qpair failed and we were unable to recover it. 00:30:38.275 [2024-11-20 15:40:27.026229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.275 [2024-11-20 15:40:27.026247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.275 qpair failed and we were unable to recover it. 00:30:38.275 [2024-11-20 15:40:27.026539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.275 [2024-11-20 15:40:27.026555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.275 qpair failed and we were unable to recover it. 00:30:38.275 [2024-11-20 15:40:27.026892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.275 [2024-11-20 15:40:27.026909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.275 qpair failed and we were unable to recover it. 00:30:38.275 [2024-11-20 15:40:27.027223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.275 [2024-11-20 15:40:27.027240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.275 qpair failed and we were unable to recover it. 
00:30:38.275 [2024-11-20 15:40:27.027592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.275 [2024-11-20 15:40:27.027608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.275 qpair failed and we were unable to recover it. 00:30:38.275 [2024-11-20 15:40:27.027941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.275 [2024-11-20 15:40:27.027957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.275 qpair failed and we were unable to recover it. 00:30:38.275 [2024-11-20 15:40:27.028299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.275 [2024-11-20 15:40:27.028316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.275 qpair failed and we were unable to recover it. 00:30:38.275 [2024-11-20 15:40:27.028659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.275 [2024-11-20 15:40:27.028679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.275 qpair failed and we were unable to recover it. 00:30:38.275 [2024-11-20 15:40:27.028903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.275 [2024-11-20 15:40:27.028919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.275 qpair failed and we were unable to recover it. 00:30:38.275 [2024-11-20 15:40:27.029275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.275 [2024-11-20 15:40:27.029292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.275 qpair failed and we were unable to recover it. 00:30:38.275 [2024-11-20 15:40:27.029636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.275 [2024-11-20 15:40:27.029666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.275 qpair failed and we were unable to recover it. 00:30:38.275 [2024-11-20 15:40:27.029997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.275 [2024-11-20 15:40:27.030011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.275 qpair failed and we were unable to recover it. 00:30:38.275 [2024-11-20 15:40:27.030327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.275 [2024-11-20 15:40:27.030343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.275 qpair failed and we were unable to recover it. 00:30:38.275 [2024-11-20 15:40:27.030642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.275 [2024-11-20 15:40:27.030657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.275 qpair failed and we were unable to recover it. 
00:30:38.275 [2024-11-20 15:40:27.030966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.275 [2024-11-20 15:40:27.030991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.275 qpair failed and we were unable to recover it. 00:30:38.275 [2024-11-20 15:40:27.031324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.275 [2024-11-20 15:40:27.031339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.275 qpair failed and we were unable to recover it. 00:30:38.275 [2024-11-20 15:40:27.031675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.275 [2024-11-20 15:40:27.031689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.275 qpair failed and we were unable to recover it. 00:30:38.275 [2024-11-20 15:40:27.032023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.275 [2024-11-20 15:40:27.032038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.275 qpair failed and we were unable to recover it. 00:30:38.275 [2024-11-20 15:40:27.032383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.275 [2024-11-20 15:40:27.032398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.275 qpair failed and we were unable to recover it. 00:30:38.275 [2024-11-20 15:40:27.032741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.275 [2024-11-20 15:40:27.032758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.275 qpair failed and we were unable to recover it. 00:30:38.275 [2024-11-20 15:40:27.033106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.275 [2024-11-20 15:40:27.033122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.275 qpair failed and we were unable to recover it. 00:30:38.275 [2024-11-20 15:40:27.034804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.275 [2024-11-20 15:40:27.034852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.275 qpair failed and we were unable to recover it. 00:30:38.275 [2024-11-20 15:40:27.035222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.275 [2024-11-20 15:40:27.035242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.275 qpair failed and we were unable to recover it. 00:30:38.275 [2024-11-20 15:40:27.035541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.275 [2024-11-20 15:40:27.035557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.275 qpair failed and we were unable to recover it. 
00:30:38.275 [2024-11-20 15:40:27.035911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.275 [2024-11-20 15:40:27.035926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.275 qpair failed and we were unable to recover it. 00:30:38.275 [2024-11-20 15:40:27.036274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.275 [2024-11-20 15:40:27.036290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.275 qpair failed and we were unable to recover it. 00:30:38.275 [2024-11-20 15:40:27.036619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.275 [2024-11-20 15:40:27.036634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.275 qpair failed and we were unable to recover it. 00:30:38.275 [2024-11-20 15:40:27.036980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.275 [2024-11-20 15:40:27.036996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.275 qpair failed and we were unable to recover it. 00:30:38.275 [2024-11-20 15:40:27.037315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.275 [2024-11-20 15:40:27.037333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.275 qpair failed and we were unable to recover it. 00:30:38.275 [2024-11-20 15:40:27.037678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.275 [2024-11-20 15:40:27.037694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.275 qpair failed and we were unable to recover it. 00:30:38.275 [2024-11-20 15:40:27.038016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.275 [2024-11-20 15:40:27.038030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.275 qpair failed and we were unable to recover it. 00:30:38.275 [2024-11-20 15:40:27.038331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.276 [2024-11-20 15:40:27.038347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.276 qpair failed and we were unable to recover it. 00:30:38.276 [2024-11-20 15:40:27.038537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.276 [2024-11-20 15:40:27.038554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.276 qpair failed and we were unable to recover it. 00:30:38.276 [2024-11-20 15:40:27.038891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.276 [2024-11-20 15:40:27.038907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.276 qpair failed and we were unable to recover it. 
00:30:38.276 [2024-11-20 15:40:27.039228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.276 [2024-11-20 15:40:27.039245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.276 qpair failed and we were unable to recover it. 00:30:38.276 [2024-11-20 15:40:27.039582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.276 [2024-11-20 15:40:27.039597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.276 qpair failed and we were unable to recover it. 00:30:38.276 [2024-11-20 15:40:27.039823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.276 [2024-11-20 15:40:27.039838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.276 qpair failed and we were unable to recover it. 00:30:38.276 [2024-11-20 15:40:27.040195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.276 [2024-11-20 15:40:27.040211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.276 qpair failed and we were unable to recover it. 00:30:38.276 [2024-11-20 15:40:27.040550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.276 [2024-11-20 15:40:27.040566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.276 qpair failed and we were unable to recover it. 00:30:38.276 [2024-11-20 15:40:27.040806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.276 [2024-11-20 15:40:27.040821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.276 qpair failed and we were unable to recover it. 00:30:38.276 [2024-11-20 15:40:27.041129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.276 [2024-11-20 15:40:27.041145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.276 qpair failed and we were unable to recover it. 00:30:38.276 [2024-11-20 15:40:27.041501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.276 [2024-11-20 15:40:27.041518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.276 qpair failed and we were unable to recover it. 00:30:38.276 [2024-11-20 15:40:27.041858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.276 [2024-11-20 15:40:27.041872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.276 qpair failed and we were unable to recover it. 00:30:38.276 [2024-11-20 15:40:27.042218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.276 [2024-11-20 15:40:27.042234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.276 qpair failed and we were unable to recover it. 
00:30:38.276 [2024-11-20 15:40:27.042562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.276 [2024-11-20 15:40:27.042577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.276 qpair failed and we were unable to recover it. 00:30:38.276 [2024-11-20 15:40:27.042920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.276 [2024-11-20 15:40:27.042935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.276 qpair failed and we were unable to recover it. 00:30:38.276 [2024-11-20 15:40:27.043226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.276 [2024-11-20 15:40:27.043242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.276 qpair failed and we were unable to recover it. 00:30:38.276 [2024-11-20 15:40:27.043574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.276 [2024-11-20 15:40:27.043591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.276 qpair failed and we were unable to recover it. 00:30:38.276 [2024-11-20 15:40:27.043991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.276 [2024-11-20 15:40:27.044006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.276 qpair failed and we were unable to recover it. 00:30:38.276 [2024-11-20 15:40:27.044302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.276 [2024-11-20 15:40:27.044318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.276 qpair failed and we were unable to recover it. 00:30:38.276 [2024-11-20 15:40:27.044712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.276 [2024-11-20 15:40:27.044731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.276 qpair failed and we were unable to recover it. 00:30:38.276 [2024-11-20 15:40:27.045064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.276 [2024-11-20 15:40:27.045079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.276 qpair failed and we were unable to recover it. 00:30:38.276 [2024-11-20 15:40:27.045402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.276 [2024-11-20 15:40:27.045418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.276 qpair failed and we were unable to recover it. 00:30:38.276 [2024-11-20 15:40:27.045755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.276 [2024-11-20 15:40:27.045770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.276 qpair failed and we were unable to recover it. 
00:30:38.276 [2024-11-20 15:40:27.046093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.276 [2024-11-20 15:40:27.046111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.276 qpair failed and we were unable to recover it. 00:30:38.276 [2024-11-20 15:40:27.046468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.276 [2024-11-20 15:40:27.046485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.276 qpair failed and we were unable to recover it. 00:30:38.276 [2024-11-20 15:40:27.046824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.276 [2024-11-20 15:40:27.046840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.276 qpair failed and we were unable to recover it. 00:30:38.276 [2024-11-20 15:40:27.047061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.276 [2024-11-20 15:40:27.047077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.276 qpair failed and we were unable to recover it. 00:30:38.276 [2024-11-20 15:40:27.047423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.276 [2024-11-20 15:40:27.047439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.276 qpair failed and we were unable to recover it. 00:30:38.276 [2024-11-20 15:40:27.047813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.276 [2024-11-20 15:40:27.047829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.276 qpair failed and we were unable to recover it. 00:30:38.276 [2024-11-20 15:40:27.048175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.276 [2024-11-20 15:40:27.048193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.276 qpair failed and we were unable to recover it. 00:30:38.276 [2024-11-20 15:40:27.048533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.276 [2024-11-20 15:40:27.048549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.276 qpair failed and we were unable to recover it. 00:30:38.276 [2024-11-20 15:40:27.048805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.276 [2024-11-20 15:40:27.048820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.276 qpair failed and we were unable to recover it. 00:30:38.276 [2024-11-20 15:40:27.049179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.276 [2024-11-20 15:40:27.049203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.276 qpair failed and we were unable to recover it. 
00:30:38.276 [2024-11-20 15:40:27.049406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.276 [2024-11-20 15:40:27.049421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.276 qpair failed and we were unable to recover it. 00:30:38.276 [2024-11-20 15:40:27.049802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.276 [2024-11-20 15:40:27.049817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.276 qpair failed and we were unable to recover it. 00:30:38.276 [2024-11-20 15:40:27.050138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.276 [2024-11-20 15:40:27.050156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.276 qpair failed and we were unable to recover it. 00:30:38.276 [2024-11-20 15:40:27.050431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.276 [2024-11-20 15:40:27.050447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.276 qpair failed and we were unable to recover it. 00:30:38.276 [2024-11-20 15:40:27.050765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.276 [2024-11-20 15:40:27.050782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.276 qpair failed and we were unable to recover it. 00:30:38.276 [2024-11-20 15:40:27.051100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.277 [2024-11-20 15:40:27.051115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.277 qpair failed and we were unable to recover it. 00:30:38.277 [2024-11-20 15:40:27.051462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.277 [2024-11-20 15:40:27.051485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.277 qpair failed and we were unable to recover it. 00:30:38.277 [2024-11-20 15:40:27.051796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.277 [2024-11-20 15:40:27.051811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.277 qpair failed and we were unable to recover it. 00:30:38.277 [2024-11-20 15:40:27.052104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.277 [2024-11-20 15:40:27.052126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.277 qpair failed and we were unable to recover it. 00:30:38.277 [2024-11-20 15:40:27.052243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.277 [2024-11-20 15:40:27.052258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.277 qpair failed and we were unable to recover it. 
00:30:38.277 [2024-11-20 15:40:27.052623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.277 [2024-11-20 15:40:27.052639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.277 qpair failed and we were unable to recover it. 00:30:38.277 [2024-11-20 15:40:27.052939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.277 [2024-11-20 15:40:27.052956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.277 qpair failed and we were unable to recover it. 00:30:38.277 [2024-11-20 15:40:27.053219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.277 [2024-11-20 15:40:27.053235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.277 qpair failed and we were unable to recover it. 00:30:38.277 [2024-11-20 15:40:27.053545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.277 [2024-11-20 15:40:27.053561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.277 qpair failed and we were unable to recover it. 00:30:38.277 [2024-11-20 15:40:27.053906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.277 [2024-11-20 15:40:27.053925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.277 qpair failed and we were unable to recover it. 00:30:38.277 [2024-11-20 15:40:27.054193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.277 [2024-11-20 15:40:27.054209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.277 qpair failed and we were unable to recover it. 00:30:38.277 [2024-11-20 15:40:27.054565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.277 [2024-11-20 15:40:27.054579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.277 qpair failed and we were unable to recover it. 00:30:38.277 [2024-11-20 15:40:27.054871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.277 [2024-11-20 15:40:27.054887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.277 qpair failed and we were unable to recover it. 00:30:38.277 [2024-11-20 15:40:27.055212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.277 [2024-11-20 15:40:27.055229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.277 qpair failed and we were unable to recover it. 00:30:38.277 [2024-11-20 15:40:27.055444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.277 [2024-11-20 15:40:27.055459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.277 qpair failed and we were unable to recover it. 
00:30:38.277 [2024-11-20 15:40:27.055807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.277 [2024-11-20 15:40:27.055821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.277 qpair failed and we were unable to recover it. 00:30:38.277 [2024-11-20 15:40:27.056169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.277 [2024-11-20 15:40:27.056186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.277 qpair failed and we were unable to recover it. 00:30:38.277 [2024-11-20 15:40:27.056533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.277 [2024-11-20 15:40:27.056548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.277 qpair failed and we were unable to recover it. 00:30:38.277 [2024-11-20 15:40:27.056769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.277 [2024-11-20 15:40:27.056784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.277 qpair failed and we were unable to recover it. 00:30:38.277 [2024-11-20 15:40:27.057084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.277 [2024-11-20 15:40:27.057099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.277 qpair failed and we were unable to recover it. 00:30:38.277 [2024-11-20 15:40:27.057446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.277 [2024-11-20 15:40:27.057464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.277 qpair failed and we were unable to recover it. 00:30:38.277 [2024-11-20 15:40:27.057805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.277 [2024-11-20 15:40:27.057820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.277 qpair failed and we were unable to recover it. 00:30:38.277 [2024-11-20 15:40:27.058153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.277 [2024-11-20 15:40:27.058187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.277 qpair failed and we were unable to recover it. 00:30:38.277 [2024-11-20 15:40:27.058538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.277 [2024-11-20 15:40:27.058553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.277 qpair failed and we were unable to recover it. 00:30:38.277 [2024-11-20 15:40:27.058854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.277 [2024-11-20 15:40:27.058875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.277 qpair failed and we were unable to recover it. 
00:30:38.277 [2024-11-20 15:40:27.059099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.277 [2024-11-20 15:40:27.059113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.277 qpair failed and we were unable to recover it. 00:30:38.277 [2024-11-20 15:40:27.059493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.277 [2024-11-20 15:40:27.059510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.277 qpair failed and we were unable to recover it. 00:30:38.277 [2024-11-20 15:40:27.059827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.277 [2024-11-20 15:40:27.059844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.277 qpair failed and we were unable to recover it. 00:30:38.277 [2024-11-20 15:40:27.060142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.277 [2024-11-20 15:40:27.060157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.277 qpair failed and we were unable to recover it. 00:30:38.277 [2024-11-20 15:40:27.060513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.277 [2024-11-20 15:40:27.060527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.277 qpair failed and we were unable to recover it. 00:30:38.277 [2024-11-20 15:40:27.060828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.277 [2024-11-20 15:40:27.060844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.277 qpair failed and we were unable to recover it. 00:30:38.277 [2024-11-20 15:40:27.061155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.277 [2024-11-20 15:40:27.061186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.277 qpair failed and we were unable to recover it. 00:30:38.277 [2024-11-20 15:40:27.061552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.277 [2024-11-20 15:40:27.061567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.277 qpair failed and we were unable to recover it. 00:30:38.277 [2024-11-20 15:40:27.061891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.277 [2024-11-20 15:40:27.061907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.277 qpair failed and we were unable to recover it. 00:30:38.277 [2024-11-20 15:40:27.062004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.277 [2024-11-20 15:40:27.062018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:38.277 qpair failed and we were unable to recover it. 
00:30:38.277 [2024-11-20 15:40:27.062585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.277 [2024-11-20 15:40:27.062711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.277 qpair failed and we were unable to recover it. 00:30:38.277 [2024-11-20 15:40:27.063013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.277 [2024-11-20 15:40:27.063050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.277 qpair failed and we were unable to recover it. 00:30:38.277 [2024-11-20 15:40:27.063552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.277 [2024-11-20 15:40:27.063659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.277 qpair failed and we were unable to recover it. 00:30:38.277 [2024-11-20 15:40:27.064096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.277 [2024-11-20 15:40:27.064133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.278 qpair failed and we were unable to recover it. 00:30:38.278 [2024-11-20 15:40:27.064428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.278 [2024-11-20 15:40:27.064460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.278 qpair failed and we were unable to recover it. 00:30:38.278 [2024-11-20 15:40:27.064821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.278 [2024-11-20 15:40:27.064854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.278 qpair failed and we were unable to recover it. 00:30:38.278 [2024-11-20 15:40:27.065234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.278 [2024-11-20 15:40:27.065290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.278 qpair failed and we were unable to recover it. 00:30:38.278 [2024-11-20 15:40:27.065660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.278 [2024-11-20 15:40:27.065689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.278 qpair failed and we were unable to recover it. 00:30:38.278 [2024-11-20 15:40:27.066061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.278 [2024-11-20 15:40:27.066099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.278 qpair failed and we were unable to recover it. 00:30:38.278 [2024-11-20 15:40:27.066484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.278 [2024-11-20 15:40:27.066515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.278 qpair failed and we were unable to recover it. 
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 15:40:27.063013 through 15:40:27.143314 ...]
00:30:38.283 [2024-11-20 15:40:27.140224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.283 [2024-11-20 15:40:27.140253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.283 qpair failed and we were unable to recover it. 00:30:38.283 [2024-11-20 15:40:27.140637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.283 [2024-11-20 15:40:27.140666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.283 qpair failed and we were unable to recover it. 00:30:38.283 [2024-11-20 15:40:27.141020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.283 [2024-11-20 15:40:27.141050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.283 qpair failed and we were unable to recover it. 00:30:38.283 [2024-11-20 15:40:27.141438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.283 [2024-11-20 15:40:27.141468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.283 qpair failed and we were unable to recover it. 00:30:38.283 [2024-11-20 15:40:27.141796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.283 [2024-11-20 15:40:27.141826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.283 qpair failed and we were unable to recover it. 00:30:38.283 [2024-11-20 15:40:27.142194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.283 [2024-11-20 15:40:27.142224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.283 qpair failed and we were unable to recover it. 00:30:38.283 [2024-11-20 15:40:27.142475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.283 [2024-11-20 15:40:27.142506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.283 qpair failed and we were unable to recover it. 00:30:38.283 [2024-11-20 15:40:27.142908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.283 [2024-11-20 15:40:27.142937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.283 qpair failed and we were unable to recover it. 00:30:38.283 [2024-11-20 15:40:27.143284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.283 [2024-11-20 15:40:27.143314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.283 qpair failed and we were unable to recover it. 00:30:38.283 [2024-11-20 15:40:27.143684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.283 [2024-11-20 15:40:27.143714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.283 qpair failed and we were unable to recover it. 
00:30:38.283 [2024-11-20 15:40:27.143969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.283 [2024-11-20 15:40:27.143998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.283 qpair failed and we were unable to recover it. 00:30:38.283 [2024-11-20 15:40:27.144392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.283 [2024-11-20 15:40:27.144421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.283 qpair failed and we were unable to recover it. 00:30:38.283 [2024-11-20 15:40:27.144812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.283 [2024-11-20 15:40:27.144840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.283 qpair failed and we were unable to recover it. 00:30:38.283 [2024-11-20 15:40:27.145185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.283 [2024-11-20 15:40:27.145216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.283 qpair failed and we were unable to recover it. 00:30:38.283 [2024-11-20 15:40:27.145604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.283 [2024-11-20 15:40:27.145631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.283 qpair failed and we were unable to recover it. 00:30:38.283 [2024-11-20 15:40:27.146004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.283 [2024-11-20 15:40:27.146034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.283 qpair failed and we were unable to recover it. 00:30:38.283 [2024-11-20 15:40:27.146390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.283 [2024-11-20 15:40:27.146421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.283 qpair failed and we were unable to recover it. 00:30:38.283 [2024-11-20 15:40:27.146800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.283 [2024-11-20 15:40:27.146828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.283 qpair failed and we were unable to recover it. 00:30:38.283 [2024-11-20 15:40:27.147182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.283 [2024-11-20 15:40:27.147211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.283 qpair failed and we were unable to recover it. 00:30:38.283 [2024-11-20 15:40:27.147583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.283 [2024-11-20 15:40:27.147612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.283 qpair failed and we were unable to recover it. 
00:30:38.283 [2024-11-20 15:40:27.147877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.283 [2024-11-20 15:40:27.147905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.283 qpair failed and we were unable to recover it. 00:30:38.283 [2024-11-20 15:40:27.148285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.283 [2024-11-20 15:40:27.148314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.283 qpair failed and we were unable to recover it. 00:30:38.283 [2024-11-20 15:40:27.148674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.283 [2024-11-20 15:40:27.148703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.283 qpair failed and we were unable to recover it. 00:30:38.283 [2024-11-20 15:40:27.148945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.283 [2024-11-20 15:40:27.148972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.283 qpair failed and we were unable to recover it. 00:30:38.283 [2024-11-20 15:40:27.149321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.283 [2024-11-20 15:40:27.149352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.283 qpair failed and we were unable to recover it. 00:30:38.283 [2024-11-20 15:40:27.149670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.284 [2024-11-20 15:40:27.149699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.284 qpair failed and we were unable to recover it. 00:30:38.284 [2024-11-20 15:40:27.150080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.284 [2024-11-20 15:40:27.150109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.284 qpair failed and we were unable to recover it. 00:30:38.284 [2024-11-20 15:40:27.150383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.284 [2024-11-20 15:40:27.150414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.284 qpair failed and we were unable to recover it. 00:30:38.284 [2024-11-20 15:40:27.150745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.284 [2024-11-20 15:40:27.150780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.284 qpair failed and we were unable to recover it. 00:30:38.284 [2024-11-20 15:40:27.151110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.284 [2024-11-20 15:40:27.151139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.284 qpair failed and we were unable to recover it. 
00:30:38.284 [2024-11-20 15:40:27.151534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.284 [2024-11-20 15:40:27.151563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.284 qpair failed and we were unable to recover it. 00:30:38.284 [2024-11-20 15:40:27.151901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.284 [2024-11-20 15:40:27.151930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.284 qpair failed and we were unable to recover it. 00:30:38.284 [2024-11-20 15:40:27.152281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.284 [2024-11-20 15:40:27.152311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.284 qpair failed and we were unable to recover it. 00:30:38.284 [2024-11-20 15:40:27.152676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.284 [2024-11-20 15:40:27.152705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.284 qpair failed and we were unable to recover it. 00:30:38.284 [2024-11-20 15:40:27.153064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.284 [2024-11-20 15:40:27.153094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.284 qpair failed and we were unable to recover it. 00:30:38.284 [2024-11-20 15:40:27.153467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.284 [2024-11-20 15:40:27.153497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.284 qpair failed and we were unable to recover it. 00:30:38.284 [2024-11-20 15:40:27.153782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.284 [2024-11-20 15:40:27.153809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.284 qpair failed and we were unable to recover it. 00:30:38.284 [2024-11-20 15:40:27.154186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.284 [2024-11-20 15:40:27.154216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.284 qpair failed and we were unable to recover it. 00:30:38.284 [2024-11-20 15:40:27.154617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.284 [2024-11-20 15:40:27.154645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.284 qpair failed and we were unable to recover it. 00:30:38.284 [2024-11-20 15:40:27.155009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.284 [2024-11-20 15:40:27.155037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.284 qpair failed and we were unable to recover it. 
00:30:38.284 [2024-11-20 15:40:27.155391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.284 [2024-11-20 15:40:27.155421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.284 qpair failed and we were unable to recover it. 00:30:38.284 [2024-11-20 15:40:27.155786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.284 [2024-11-20 15:40:27.155814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.284 qpair failed and we were unable to recover it. 00:30:38.284 [2024-11-20 15:40:27.156180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.284 [2024-11-20 15:40:27.156210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.284 qpair failed and we were unable to recover it. 00:30:38.284 [2024-11-20 15:40:27.156587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.284 [2024-11-20 15:40:27.156616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.284 qpair failed and we were unable to recover it. 00:30:38.284 [2024-11-20 15:40:27.156857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.284 [2024-11-20 15:40:27.156888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.284 qpair failed and we were unable to recover it. 00:30:38.284 [2024-11-20 15:40:27.157241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.284 [2024-11-20 15:40:27.157272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.284 qpair failed and we were unable to recover it. 00:30:38.284 [2024-11-20 15:40:27.157637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.284 [2024-11-20 15:40:27.157665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.284 qpair failed and we were unable to recover it. 00:30:38.284 [2024-11-20 15:40:27.158025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.284 [2024-11-20 15:40:27.158053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.284 qpair failed and we were unable to recover it. 00:30:38.284 [2024-11-20 15:40:27.158412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.284 [2024-11-20 15:40:27.158442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.284 qpair failed and we were unable to recover it. 00:30:38.284 [2024-11-20 15:40:27.158873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.284 [2024-11-20 15:40:27.158901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.284 qpair failed and we were unable to recover it. 
00:30:38.284 [2024-11-20 15:40:27.159273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.284 [2024-11-20 15:40:27.159303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.284 qpair failed and we were unable to recover it. 00:30:38.284 [2024-11-20 15:40:27.159683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.284 [2024-11-20 15:40:27.159712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.284 qpair failed and we were unable to recover it. 00:30:38.284 [2024-11-20 15:40:27.159963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.284 [2024-11-20 15:40:27.159995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.284 qpair failed and we were unable to recover it. 00:30:38.284 [2024-11-20 15:40:27.160241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.284 [2024-11-20 15:40:27.160273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.284 qpair failed and we were unable to recover it. 00:30:38.284 [2024-11-20 15:40:27.160654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.284 [2024-11-20 15:40:27.160684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.284 qpair failed and we were unable to recover it. 00:30:38.284 [2024-11-20 15:40:27.161039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.284 [2024-11-20 15:40:27.161068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.284 qpair failed and we were unable to recover it. 00:30:38.284 [2024-11-20 15:40:27.161455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.284 [2024-11-20 15:40:27.161485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.284 qpair failed and we were unable to recover it. 00:30:38.284 [2024-11-20 15:40:27.161834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.284 [2024-11-20 15:40:27.161863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.284 qpair failed and we were unable to recover it. 00:30:38.284 [2024-11-20 15:40:27.162238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.284 [2024-11-20 15:40:27.162267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.285 qpair failed and we were unable to recover it. 00:30:38.285 [2024-11-20 15:40:27.162511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.285 [2024-11-20 15:40:27.162539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.285 qpair failed and we were unable to recover it. 
00:30:38.285 [2024-11-20 15:40:27.162907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.285 [2024-11-20 15:40:27.162937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.285 qpair failed and we were unable to recover it. 00:30:38.285 [2024-11-20 15:40:27.163310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.285 [2024-11-20 15:40:27.163339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.285 qpair failed and we were unable to recover it. 00:30:38.285 [2024-11-20 15:40:27.163706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.285 [2024-11-20 15:40:27.163734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.285 qpair failed and we were unable to recover it. 00:30:38.285 [2024-11-20 15:40:27.164100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.285 [2024-11-20 15:40:27.164128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.285 qpair failed and we were unable to recover it. 00:30:38.285 [2024-11-20 15:40:27.164512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.285 [2024-11-20 15:40:27.164541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.285 qpair failed and we were unable to recover it. 00:30:38.285 [2024-11-20 15:40:27.164890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.285 [2024-11-20 15:40:27.164919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.285 qpair failed and we were unable to recover it. 00:30:38.285 [2024-11-20 15:40:27.165284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.285 [2024-11-20 15:40:27.165314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.285 qpair failed and we were unable to recover it. 00:30:38.285 [2024-11-20 15:40:27.165686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.285 [2024-11-20 15:40:27.165714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.285 qpair failed and we were unable to recover it. 00:30:38.285 [2024-11-20 15:40:27.166108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.285 [2024-11-20 15:40:27.166142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.285 qpair failed and we were unable to recover it. 00:30:38.285 [2024-11-20 15:40:27.166563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.285 [2024-11-20 15:40:27.166592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.285 qpair failed and we were unable to recover it. 
00:30:38.285 [2024-11-20 15:40:27.166900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.285 [2024-11-20 15:40:27.166928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.285 qpair failed and we were unable to recover it. 00:30:38.285 [2024-11-20 15:40:27.167276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.285 [2024-11-20 15:40:27.167307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.285 qpair failed and we were unable to recover it. 00:30:38.285 [2024-11-20 15:40:27.167682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.285 [2024-11-20 15:40:27.167711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.285 qpair failed and we were unable to recover it. 00:30:38.285 [2024-11-20 15:40:27.168064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.285 [2024-11-20 15:40:27.168093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.285 qpair failed and we were unable to recover it. 00:30:38.285 [2024-11-20 15:40:27.168448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.285 [2024-11-20 15:40:27.168477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.285 qpair failed and we were unable to recover it. 00:30:38.285 [2024-11-20 15:40:27.168814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.285 [2024-11-20 15:40:27.168843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.285 qpair failed and we were unable to recover it. 00:30:38.285 [2024-11-20 15:40:27.169218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.285 [2024-11-20 15:40:27.169248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.285 qpair failed and we were unable to recover it. 00:30:38.285 [2024-11-20 15:40:27.169585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.285 [2024-11-20 15:40:27.169612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.285 qpair failed and we were unable to recover it. 00:30:38.285 [2024-11-20 15:40:27.169865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.285 [2024-11-20 15:40:27.169893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.285 qpair failed and we were unable to recover it. 00:30:38.285 [2024-11-20 15:40:27.170230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.285 [2024-11-20 15:40:27.170260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.285 qpair failed and we were unable to recover it. 
00:30:38.285 [2024-11-20 15:40:27.170662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.285 [2024-11-20 15:40:27.170691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.285 qpair failed and we were unable to recover it. 00:30:38.285 [2024-11-20 15:40:27.171052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.285 [2024-11-20 15:40:27.171082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.285 qpair failed and we were unable to recover it. 00:30:38.285 [2024-11-20 15:40:27.171429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.285 [2024-11-20 15:40:27.171460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.285 qpair failed and we were unable to recover it. 00:30:38.285 [2024-11-20 15:40:27.171771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.285 [2024-11-20 15:40:27.171800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.285 qpair failed and we were unable to recover it. 00:30:38.285 [2024-11-20 15:40:27.171976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.285 [2024-11-20 15:40:27.172006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.285 qpair failed and we were unable to recover it. 00:30:38.285 [2024-11-20 15:40:27.172420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.285 [2024-11-20 15:40:27.172449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.285 qpair failed and we were unable to recover it. 00:30:38.285 [2024-11-20 15:40:27.172826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.285 [2024-11-20 15:40:27.172856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.285 qpair failed and we were unable to recover it. 00:30:38.285 [2024-11-20 15:40:27.173227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.285 [2024-11-20 15:40:27.173258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.285 qpair failed and we were unable to recover it. 00:30:38.285 [2024-11-20 15:40:27.173531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.285 [2024-11-20 15:40:27.173560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.285 qpair failed and we were unable to recover it. 00:30:38.285 [2024-11-20 15:40:27.173904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.285 [2024-11-20 15:40:27.173934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.285 qpair failed and we were unable to recover it. 
00:30:38.285 [2024-11-20 15:40:27.174263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.285 [2024-11-20 15:40:27.174292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.285 qpair failed and we were unable to recover it. 00:30:38.285 [2024-11-20 15:40:27.174675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.285 [2024-11-20 15:40:27.174704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.285 qpair failed and we were unable to recover it. 00:30:38.285 [2024-11-20 15:40:27.175055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.285 [2024-11-20 15:40:27.175085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.285 qpair failed and we were unable to recover it. 00:30:38.285 [2024-11-20 15:40:27.175505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.285 [2024-11-20 15:40:27.175534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.285 qpair failed and we were unable to recover it. 00:30:38.285 [2024-11-20 15:40:27.175871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.285 [2024-11-20 15:40:27.175900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.285 qpair failed and we were unable to recover it. 00:30:38.285 [2024-11-20 15:40:27.176245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.285 [2024-11-20 15:40:27.176275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.285 qpair failed and we were unable to recover it. 00:30:38.285 [2024-11-20 15:40:27.176661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.286 [2024-11-20 15:40:27.176691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.286 qpair failed and we were unable to recover it. 00:30:38.286 [2024-11-20 15:40:27.177048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.286 [2024-11-20 15:40:27.177077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.286 qpair failed and we were unable to recover it. 00:30:38.286 [2024-11-20 15:40:27.177470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.286 [2024-11-20 15:40:27.177500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.286 qpair failed and we were unable to recover it. 00:30:38.286 [2024-11-20 15:40:27.177752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.286 [2024-11-20 15:40:27.177781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.286 qpair failed and we were unable to recover it. 
00:30:38.286 [2024-11-20 15:40:27.178203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.286 [2024-11-20 15:40:27.178232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.286 qpair failed and we were unable to recover it. 00:30:38.286 [2024-11-20 15:40:27.178585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.286 [2024-11-20 15:40:27.178621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.286 qpair failed and we were unable to recover it. 00:30:38.286 [2024-11-20 15:40:27.178991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.286 [2024-11-20 15:40:27.179020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.286 qpair failed and we were unable to recover it. 00:30:38.286 [2024-11-20 15:40:27.179379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.286 [2024-11-20 15:40:27.179409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.286 qpair failed and we were unable to recover it. 00:30:38.286 [2024-11-20 15:40:27.179781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.286 [2024-11-20 15:40:27.179810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.286 qpair failed and we were unable to recover it. 00:30:38.286 [2024-11-20 15:40:27.180179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.286 [2024-11-20 15:40:27.180210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.286 qpair failed and we were unable to recover it. 00:30:38.286 [2024-11-20 15:40:27.180524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.286 [2024-11-20 15:40:27.180553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.286 qpair failed and we were unable to recover it. 00:30:38.286 [2024-11-20 15:40:27.180820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.286 [2024-11-20 15:40:27.180849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.286 qpair failed and we were unable to recover it. 00:30:38.286 [2024-11-20 15:40:27.181216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.286 [2024-11-20 15:40:27.181252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.286 qpair failed and we were unable to recover it. 00:30:38.286 [2024-11-20 15:40:27.181496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.286 [2024-11-20 15:40:27.181525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.286 qpair failed and we were unable to recover it. 
00:30:38.286 [2024-11-20 15:40:27.181886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.286 [2024-11-20 15:40:27.181916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.286 qpair failed and we were unable to recover it. 00:30:38.286 [2024-11-20 15:40:27.182201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.286 [2024-11-20 15:40:27.182232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.286 qpair failed and we were unable to recover it. 00:30:38.286 [2024-11-20 15:40:27.182575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.286 [2024-11-20 15:40:27.182605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.286 qpair failed and we were unable to recover it. 00:30:38.286 [2024-11-20 15:40:27.183010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.286 [2024-11-20 15:40:27.183040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.286 qpair failed and we were unable to recover it. 00:30:38.286 [2024-11-20 15:40:27.183406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.286 [2024-11-20 15:40:27.183438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.286 qpair failed and we were unable to recover it. 00:30:38.286 [2024-11-20 15:40:27.183811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.286 [2024-11-20 15:40:27.183840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.286 qpair failed and we were unable to recover it. 00:30:38.286 [2024-11-20 15:40:27.184207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.286 [2024-11-20 15:40:27.184236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.286 qpair failed and we were unable to recover it. 00:30:38.286 [2024-11-20 15:40:27.184593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.286 [2024-11-20 15:40:27.184621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.286 qpair failed and we were unable to recover it. 00:30:38.286 [2024-11-20 15:40:27.184984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.286 [2024-11-20 15:40:27.185012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.286 qpair failed and we were unable to recover it. 00:30:38.286 [2024-11-20 15:40:27.185355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.286 [2024-11-20 15:40:27.185385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.286 qpair failed and we were unable to recover it. 
00:30:38.286 [2024-11-20 15:40:27.185751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.286 [2024-11-20 15:40:27.185781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.286 qpair failed and we were unable to recover it. 00:30:38.286 [2024-11-20 15:40:27.186148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.286 [2024-11-20 15:40:27.186188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.286 qpair failed and we were unable to recover it. 00:30:38.286 [2024-11-20 15:40:27.186544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.286 [2024-11-20 15:40:27.186573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.286 qpair failed and we were unable to recover it. 00:30:38.286 [2024-11-20 15:40:27.186925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.286 [2024-11-20 15:40:27.186954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.286 qpair failed and we were unable to recover it. 00:30:38.286 [2024-11-20 15:40:27.187318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.286 [2024-11-20 15:40:27.187348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.286 qpair failed and we were unable to recover it. 00:30:38.286 [2024-11-20 15:40:27.187692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.286 [2024-11-20 15:40:27.187720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.286 qpair failed and we were unable to recover it. 00:30:38.286 [2024-11-20 15:40:27.187994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.286 [2024-11-20 15:40:27.188023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.286 qpair failed and we were unable to recover it. 00:30:38.286 [2024-11-20 15:40:27.188301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.286 [2024-11-20 15:40:27.188331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.286 qpair failed and we were unable to recover it. 00:30:38.286 [2024-11-20 15:40:27.188702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.286 [2024-11-20 15:40:27.188732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.286 qpair failed and we were unable to recover it. 00:30:38.286 [2024-11-20 15:40:27.189113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.286 [2024-11-20 15:40:27.189142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.286 qpair failed and we were unable to recover it. 
00:30:38.286 [2024-11-20 15:40:27.189493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.286 [2024-11-20 15:40:27.189524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.286 qpair failed and we were unable to recover it. 00:30:38.286 [2024-11-20 15:40:27.189877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.286 [2024-11-20 15:40:27.189905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.286 qpair failed and we were unable to recover it. 00:30:38.286 [2024-11-20 15:40:27.190280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.286 [2024-11-20 15:40:27.190311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.286 qpair failed and we were unable to recover it. 00:30:38.286 [2024-11-20 15:40:27.190667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.286 [2024-11-20 15:40:27.190695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.286 qpair failed and we were unable to recover it. 00:30:38.287 [2024-11-20 15:40:27.190954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.287 [2024-11-20 15:40:27.190982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.287 qpair failed and we were unable to recover it. 00:30:38.287 [2024-11-20 15:40:27.191353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.287 [2024-11-20 15:40:27.191384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.287 qpair failed and we were unable to recover it. 00:30:38.287 [2024-11-20 15:40:27.191771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.287 [2024-11-20 15:40:27.191799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.287 qpair failed and we were unable to recover it. 00:30:38.287 [2024-11-20 15:40:27.192058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.287 [2024-11-20 15:40:27.192087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.287 qpair failed and we were unable to recover it. 00:30:38.287 [2024-11-20 15:40:27.192456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.287 [2024-11-20 15:40:27.192486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.287 qpair failed and we were unable to recover it. 00:30:38.287 [2024-11-20 15:40:27.192857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.287 [2024-11-20 15:40:27.192887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.287 qpair failed and we were unable to recover it. 
00:30:38.287 [2024-11-20 15:40:27.193259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.287 [2024-11-20 15:40:27.193288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.287 qpair failed and we were unable to recover it.
[... the same three-line failure — posix.c:1054:posix_sock_create "connect() failed, errno = 111", nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420", "qpair failed and we were unable to recover it." — repeats continuously from 15:40:27.193 through 15:40:27.273 ...]
00:30:38.565 [2024-11-20 15:40:27.273095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.565 [2024-11-20 15:40:27.273124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.565 qpair failed and we were unable to recover it.
00:30:38.565 [2024-11-20 15:40:27.273481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.565 [2024-11-20 15:40:27.273511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.565 qpair failed and we were unable to recover it. 00:30:38.565 [2024-11-20 15:40:27.273880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.565 [2024-11-20 15:40:27.273908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.565 qpair failed and we were unable to recover it. 00:30:38.565 [2024-11-20 15:40:27.274195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.565 [2024-11-20 15:40:27.274225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.565 qpair failed and we were unable to recover it. 00:30:38.565 [2024-11-20 15:40:27.274596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.565 [2024-11-20 15:40:27.274625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.565 qpair failed and we were unable to recover it. 00:30:38.565 [2024-11-20 15:40:27.274878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.565 [2024-11-20 15:40:27.274906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.565 qpair failed and we were unable to recover it. 00:30:38.565 [2024-11-20 15:40:27.275259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.565 [2024-11-20 15:40:27.275289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.565 qpair failed and we were unable to recover it. 00:30:38.565 [2024-11-20 15:40:27.275661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.565 [2024-11-20 15:40:27.275690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.565 qpair failed and we were unable to recover it. 00:30:38.565 [2024-11-20 15:40:27.276038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.565 [2024-11-20 15:40:27.276067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.565 qpair failed and we were unable to recover it. 00:30:38.565 [2024-11-20 15:40:27.276430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.565 [2024-11-20 15:40:27.276459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.565 qpair failed and we were unable to recover it. 00:30:38.565 [2024-11-20 15:40:27.276822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.565 [2024-11-20 15:40:27.276851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.565 qpair failed and we were unable to recover it. 
00:30:38.565 [2024-11-20 15:40:27.277221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.565 [2024-11-20 15:40:27.277250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.565 qpair failed and we were unable to recover it. 00:30:38.565 [2024-11-20 15:40:27.277579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.565 [2024-11-20 15:40:27.277607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.565 qpair failed and we were unable to recover it. 00:30:38.565 [2024-11-20 15:40:27.277967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.565 [2024-11-20 15:40:27.277997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.565 qpair failed and we were unable to recover it. 00:30:38.565 [2024-11-20 15:40:27.278340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.565 [2024-11-20 15:40:27.278369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.565 qpair failed and we were unable to recover it. 00:30:38.565 [2024-11-20 15:40:27.278715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.565 [2024-11-20 15:40:27.278743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.565 qpair failed and we were unable to recover it. 00:30:38.565 [2024-11-20 15:40:27.279109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.565 [2024-11-20 15:40:27.279137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.565 qpair failed and we were unable to recover it. 00:30:38.565 [2024-11-20 15:40:27.279528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.565 [2024-11-20 15:40:27.279559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.565 qpair failed and we were unable to recover it. 00:30:38.565 [2024-11-20 15:40:27.279926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.565 [2024-11-20 15:40:27.279954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.565 qpair failed and we were unable to recover it. 00:30:38.565 [2024-11-20 15:40:27.280335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.565 [2024-11-20 15:40:27.280365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.565 qpair failed and we were unable to recover it. 00:30:38.565 [2024-11-20 15:40:27.280720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.565 [2024-11-20 15:40:27.280748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.565 qpair failed and we were unable to recover it. 
00:30:38.565 [2024-11-20 15:40:27.281036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.565 [2024-11-20 15:40:27.281065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.565 qpair failed and we were unable to recover it. 00:30:38.565 [2024-11-20 15:40:27.281465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.565 [2024-11-20 15:40:27.281495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.565 qpair failed and we were unable to recover it. 00:30:38.565 [2024-11-20 15:40:27.281849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.565 [2024-11-20 15:40:27.281878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.565 qpair failed and we were unable to recover it. 00:30:38.565 [2024-11-20 15:40:27.282221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.565 [2024-11-20 15:40:27.282251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.565 qpair failed and we were unable to recover it. 00:30:38.565 [2024-11-20 15:40:27.282605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.565 [2024-11-20 15:40:27.282634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.565 qpair failed and we were unable to recover it. 00:30:38.565 [2024-11-20 15:40:27.283022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.565 [2024-11-20 15:40:27.283063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.565 qpair failed and we were unable to recover it. 00:30:38.565 [2024-11-20 15:40:27.283401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.565 [2024-11-20 15:40:27.283431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.565 qpair failed and we were unable to recover it. 00:30:38.566 [2024-11-20 15:40:27.283781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.566 [2024-11-20 15:40:27.283809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.566 qpair failed and we were unable to recover it. 00:30:38.566 [2024-11-20 15:40:27.284179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.566 [2024-11-20 15:40:27.284208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.566 qpair failed and we were unable to recover it. 00:30:38.566 [2024-11-20 15:40:27.284558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.566 [2024-11-20 15:40:27.284587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.566 qpair failed and we were unable to recover it. 
00:30:38.566 [2024-11-20 15:40:27.284946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.566 [2024-11-20 15:40:27.284973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.566 qpair failed and we were unable to recover it. 00:30:38.566 [2024-11-20 15:40:27.285390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.566 [2024-11-20 15:40:27.285419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.566 qpair failed and we were unable to recover it. 00:30:38.566 [2024-11-20 15:40:27.285775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.566 [2024-11-20 15:40:27.285805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.566 qpair failed and we were unable to recover it. 00:30:38.566 [2024-11-20 15:40:27.286173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.566 [2024-11-20 15:40:27.286203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.566 qpair failed and we were unable to recover it. 00:30:38.566 [2024-11-20 15:40:27.286601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.566 [2024-11-20 15:40:27.286629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.566 qpair failed and we were unable to recover it. 00:30:38.566 [2024-11-20 15:40:27.286991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.566 [2024-11-20 15:40:27.287020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.566 qpair failed and we were unable to recover it. 00:30:38.566 [2024-11-20 15:40:27.287391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.566 [2024-11-20 15:40:27.287421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.566 qpair failed and we were unable to recover it. 00:30:38.566 [2024-11-20 15:40:27.287779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.566 [2024-11-20 15:40:27.287807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.566 qpair failed and we were unable to recover it. 00:30:38.566 [2024-11-20 15:40:27.288184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.566 [2024-11-20 15:40:27.288213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.566 qpair failed and we were unable to recover it. 00:30:38.566 [2024-11-20 15:40:27.288570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.566 [2024-11-20 15:40:27.288599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.566 qpair failed and we were unable to recover it. 
00:30:38.566 [2024-11-20 15:40:27.288961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.566 [2024-11-20 15:40:27.288989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.566 qpair failed and we were unable to recover it. 00:30:38.566 [2024-11-20 15:40:27.289370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.566 [2024-11-20 15:40:27.289399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.566 qpair failed and we were unable to recover it. 00:30:38.566 [2024-11-20 15:40:27.289771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.566 [2024-11-20 15:40:27.289799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.566 qpair failed and we were unable to recover it. 00:30:38.566 [2024-11-20 15:40:27.290178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.566 [2024-11-20 15:40:27.290208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.566 qpair failed and we were unable to recover it. 00:30:38.566 [2024-11-20 15:40:27.290567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.566 [2024-11-20 15:40:27.290595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.566 qpair failed and we were unable to recover it. 00:30:38.566 [2024-11-20 15:40:27.290964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.566 [2024-11-20 15:40:27.290993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.566 qpair failed and we were unable to recover it. 00:30:38.566 [2024-11-20 15:40:27.291417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.566 [2024-11-20 15:40:27.291446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.566 qpair failed and we were unable to recover it. 00:30:38.566 [2024-11-20 15:40:27.291775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.566 [2024-11-20 15:40:27.291803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.566 qpair failed and we were unable to recover it. 00:30:38.566 [2024-11-20 15:40:27.292167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.566 [2024-11-20 15:40:27.292196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.566 qpair failed and we were unable to recover it. 00:30:38.566 [2024-11-20 15:40:27.292473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.566 [2024-11-20 15:40:27.292502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.566 qpair failed and we were unable to recover it. 
00:30:38.566 [2024-11-20 15:40:27.292837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.566 [2024-11-20 15:40:27.292866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.566 qpair failed and we were unable to recover it. 00:30:38.566 [2024-11-20 15:40:27.293231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.566 [2024-11-20 15:40:27.293261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.566 qpair failed and we were unable to recover it. 00:30:38.566 [2024-11-20 15:40:27.293628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.566 [2024-11-20 15:40:27.293656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.566 qpair failed and we were unable to recover it. 00:30:38.566 [2024-11-20 15:40:27.294028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.566 [2024-11-20 15:40:27.294057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.566 qpair failed and we were unable to recover it. 00:30:38.566 [2024-11-20 15:40:27.294426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.566 [2024-11-20 15:40:27.294457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.566 qpair failed and we were unable to recover it. 00:30:38.566 [2024-11-20 15:40:27.294691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.566 [2024-11-20 15:40:27.294722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.566 qpair failed and we were unable to recover it. 00:30:38.566 [2024-11-20 15:40:27.295057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.566 [2024-11-20 15:40:27.295086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.566 qpair failed and we were unable to recover it. 00:30:38.566 [2024-11-20 15:40:27.295516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.566 [2024-11-20 15:40:27.295547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.566 qpair failed and we were unable to recover it. 00:30:38.566 [2024-11-20 15:40:27.295802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.566 [2024-11-20 15:40:27.295830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.566 qpair failed and we were unable to recover it. 00:30:38.566 [2024-11-20 15:40:27.296181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.566 [2024-11-20 15:40:27.296211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.566 qpair failed and we were unable to recover it. 
00:30:38.566 [2024-11-20 15:40:27.296576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.566 [2024-11-20 15:40:27.296604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.566 qpair failed and we were unable to recover it. 00:30:38.566 [2024-11-20 15:40:27.296990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.566 [2024-11-20 15:40:27.297018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.566 qpair failed and we were unable to recover it. 00:30:38.566 [2024-11-20 15:40:27.297463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.567 [2024-11-20 15:40:27.297492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.567 qpair failed and we were unable to recover it. 00:30:38.567 [2024-11-20 15:40:27.297825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.567 [2024-11-20 15:40:27.297854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.567 qpair failed and we were unable to recover it. 00:30:38.567 [2024-11-20 15:40:27.298217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.567 [2024-11-20 15:40:27.298246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.567 qpair failed and we were unable to recover it. 00:30:38.567 [2024-11-20 15:40:27.298621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.567 [2024-11-20 15:40:27.298656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.567 qpair failed and we were unable to recover it. 00:30:38.567 [2024-11-20 15:40:27.299019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.567 [2024-11-20 15:40:27.299048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.567 qpair failed and we were unable to recover it. 00:30:38.567 [2024-11-20 15:40:27.299395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.567 [2024-11-20 15:40:27.299425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.567 qpair failed and we were unable to recover it. 00:30:38.567 [2024-11-20 15:40:27.299797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.567 [2024-11-20 15:40:27.299825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.567 qpair failed and we were unable to recover it. 00:30:38.567 [2024-11-20 15:40:27.300177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.567 [2024-11-20 15:40:27.300207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.567 qpair failed and we were unable to recover it. 
00:30:38.567 [2024-11-20 15:40:27.300561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.567 [2024-11-20 15:40:27.300589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.567 qpair failed and we were unable to recover it. 00:30:38.567 [2024-11-20 15:40:27.300956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.567 [2024-11-20 15:40:27.300985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.567 qpair failed and we were unable to recover it. 00:30:38.567 [2024-11-20 15:40:27.301349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.567 [2024-11-20 15:40:27.301377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.567 qpair failed and we were unable to recover it. 00:30:38.567 [2024-11-20 15:40:27.301715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.567 [2024-11-20 15:40:27.301743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.567 qpair failed and we were unable to recover it. 00:30:38.567 [2024-11-20 15:40:27.302101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.567 [2024-11-20 15:40:27.302130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.567 qpair failed and we were unable to recover it. 00:30:38.567 [2024-11-20 15:40:27.302401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.567 [2024-11-20 15:40:27.302430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.567 qpair failed and we were unable to recover it. 00:30:38.567 [2024-11-20 15:40:27.302783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.567 [2024-11-20 15:40:27.302811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.567 qpair failed and we were unable to recover it. 00:30:38.567 [2024-11-20 15:40:27.303181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.567 [2024-11-20 15:40:27.303212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.567 qpair failed and we were unable to recover it. 00:30:38.567 [2024-11-20 15:40:27.303572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.567 [2024-11-20 15:40:27.303599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.567 qpair failed and we were unable to recover it. 00:30:38.567 [2024-11-20 15:40:27.303983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.567 [2024-11-20 15:40:27.304013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.567 qpair failed and we were unable to recover it. 
00:30:38.567 [2024-11-20 15:40:27.304392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.567 [2024-11-20 15:40:27.304423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.567 qpair failed and we were unable to recover it. 00:30:38.567 [2024-11-20 15:40:27.304780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.567 [2024-11-20 15:40:27.304808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.567 qpair failed and we were unable to recover it. 00:30:38.567 [2024-11-20 15:40:27.305183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.567 [2024-11-20 15:40:27.305212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.567 qpair failed and we were unable to recover it. 00:30:38.567 [2024-11-20 15:40:27.305584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.567 [2024-11-20 15:40:27.305611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.567 qpair failed and we were unable to recover it. 00:30:38.567 [2024-11-20 15:40:27.305998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.567 [2024-11-20 15:40:27.306026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.567 qpair failed and we were unable to recover it. 00:30:38.567 [2024-11-20 15:40:27.306375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.567 [2024-11-20 15:40:27.306405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.567 qpair failed and we were unable to recover it. 00:30:38.567 [2024-11-20 15:40:27.306766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.567 [2024-11-20 15:40:27.306794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.567 qpair failed and we were unable to recover it. 00:30:38.567 [2024-11-20 15:40:27.307153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.567 [2024-11-20 15:40:27.307193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.567 qpair failed and we were unable to recover it. 00:30:38.567 [2024-11-20 15:40:27.307543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.567 [2024-11-20 15:40:27.307571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.567 qpair failed and we were unable to recover it. 00:30:38.567 [2024-11-20 15:40:27.307945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.567 [2024-11-20 15:40:27.307973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.567 qpair failed and we were unable to recover it. 
00:30:38.567 [2024-11-20 15:40:27.308276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.567 [2024-11-20 15:40:27.308306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.567 qpair failed and we were unable to recover it. 00:30:38.567 [2024-11-20 15:40:27.308686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.567 [2024-11-20 15:40:27.308714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.567 qpair failed and we were unable to recover it. 00:30:38.567 [2024-11-20 15:40:27.309060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.567 [2024-11-20 15:40:27.309090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.567 qpair failed and we were unable to recover it. 00:30:38.567 [2024-11-20 15:40:27.309305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.567 [2024-11-20 15:40:27.309337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.567 qpair failed and we were unable to recover it. 00:30:38.567 [2024-11-20 15:40:27.309758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.567 [2024-11-20 15:40:27.309788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.567 qpair failed and we were unable to recover it. 00:30:38.567 [2024-11-20 15:40:27.310145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.567 [2024-11-20 15:40:27.310194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.567 qpair failed and we were unable to recover it. 00:30:38.567 [2024-11-20 15:40:27.310519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.567 [2024-11-20 15:40:27.310550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.567 qpair failed and we were unable to recover it. 00:30:38.567 [2024-11-20 15:40:27.310911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.567 [2024-11-20 15:40:27.310940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.567 qpair failed and we were unable to recover it. 00:30:38.567 [2024-11-20 15:40:27.311302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.567 [2024-11-20 15:40:27.311331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.567 qpair failed and we were unable to recover it. 00:30:38.567 [2024-11-20 15:40:27.311687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.567 [2024-11-20 15:40:27.311716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.567 qpair failed and we were unable to recover it. 
00:30:38.568 [2024-11-20 15:40:27.312063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.568 [2024-11-20 15:40:27.312091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.568 qpair failed and we were unable to recover it. 00:30:38.568 [2024-11-20 15:40:27.312434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.568 [2024-11-20 15:40:27.312464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.568 qpair failed and we were unable to recover it. 00:30:38.568 [2024-11-20 15:40:27.312729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.568 [2024-11-20 15:40:27.312758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.568 qpair failed and we were unable to recover it. 00:30:38.568 [2024-11-20 15:40:27.313104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.568 [2024-11-20 15:40:27.313132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.568 qpair failed and we were unable to recover it. 00:30:38.568 [2024-11-20 15:40:27.313517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.568 [2024-11-20 15:40:27.313546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.568 qpair failed and we were unable to recover it. 00:30:38.568 [2024-11-20 15:40:27.313906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.568 [2024-11-20 15:40:27.313941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.568 qpair failed and we were unable to recover it. 00:30:38.568 [2024-11-20 15:40:27.314300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.568 [2024-11-20 15:40:27.314329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.568 qpair failed and we were unable to recover it. 00:30:38.568 [2024-11-20 15:40:27.314700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.568 [2024-11-20 15:40:27.314728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.568 qpair failed and we were unable to recover it. 00:30:38.568 [2024-11-20 15:40:27.315083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.568 [2024-11-20 15:40:27.315110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.568 qpair failed and we were unable to recover it. 00:30:38.568 [2024-11-20 15:40:27.315466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.568 [2024-11-20 15:40:27.315495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.568 qpair failed and we were unable to recover it. 
00:30:38.568 [2024-11-20 15:40:27.315857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.568 [2024-11-20 15:40:27.315885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.568 qpair failed and we were unable to recover it. 00:30:38.568 [2024-11-20 15:40:27.316244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.568 [2024-11-20 15:40:27.316274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.568 qpair failed and we were unable to recover it. 00:30:38.568 [2024-11-20 15:40:27.316647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.568 [2024-11-20 15:40:27.316676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.568 qpair failed and we were unable to recover it. 00:30:38.568 [2024-11-20 15:40:27.316976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.568 [2024-11-20 15:40:27.317004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.568 qpair failed and we were unable to recover it. 00:30:38.568 [2024-11-20 15:40:27.317374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.568 [2024-11-20 15:40:27.317403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.568 qpair failed and we were unable to recover it. 00:30:38.568 [2024-11-20 15:40:27.317765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.568 [2024-11-20 15:40:27.317793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.568 qpair failed and we were unable to recover it. 00:30:38.568 [2024-11-20 15:40:27.318137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.568 [2024-11-20 15:40:27.318181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.568 qpair failed and we were unable to recover it. 00:30:38.568 [2024-11-20 15:40:27.318557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.568 [2024-11-20 15:40:27.318585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.568 qpair failed and we were unable to recover it. 00:30:38.568 [2024-11-20 15:40:27.318943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.568 [2024-11-20 15:40:27.318972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.568 qpair failed and we were unable to recover it. 00:30:38.568 [2024-11-20 15:40:27.319336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.568 [2024-11-20 15:40:27.319366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.568 qpair failed and we were unable to recover it. 
00:30:38.568 [2024-11-20 15:40:27.319670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.568 [2024-11-20 15:40:27.319700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.568 qpair failed and we were unable to recover it. 00:30:38.568 [2024-11-20 15:40:27.320089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.568 [2024-11-20 15:40:27.320118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.568 qpair failed and we were unable to recover it. 00:30:38.568 [2024-11-20 15:40:27.320480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.568 [2024-11-20 15:40:27.320510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.568 qpair failed and we were unable to recover it. 00:30:38.568 [2024-11-20 15:40:27.320758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.568 [2024-11-20 15:40:27.320789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.568 qpair failed and we were unable to recover it. 00:30:38.568 [2024-11-20 15:40:27.321177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.568 [2024-11-20 15:40:27.321207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.568 qpair failed and we were unable to recover it. 00:30:38.568 [2024-11-20 15:40:27.321616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.568 [2024-11-20 15:40:27.321644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.568 qpair failed and we were unable to recover it. 00:30:38.568 [2024-11-20 15:40:27.321889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.568 [2024-11-20 15:40:27.321917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.568 qpair failed and we were unable to recover it. 00:30:38.568 [2024-11-20 15:40:27.322302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.568 [2024-11-20 15:40:27.322332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.568 qpair failed and we were unable to recover it. 00:30:38.568 [2024-11-20 15:40:27.322627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.568 [2024-11-20 15:40:27.322654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.568 qpair failed and we were unable to recover it. 00:30:38.568 [2024-11-20 15:40:27.323021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.568 [2024-11-20 15:40:27.323050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.568 qpair failed and we were unable to recover it. 
00:30:38.568 [2024-11-20 15:40:27.323387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.568 [2024-11-20 15:40:27.323418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.568 qpair failed and we were unable to recover it.
[... the same three-message cycle — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats back-to-back with only the timestamps advancing, from 2024-11-20 15:40:27.323 through 15:40:27.403 (job clock 00:30:38.568–00:30:38.574) ...]
00:30:38.574 [2024-11-20 15:40:27.403513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.574 [2024-11-20 15:40:27.403543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.574 qpair failed and we were unable to recover it. 00:30:38.574 [2024-11-20 15:40:27.403907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.574 [2024-11-20 15:40:27.403937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.574 qpair failed and we were unable to recover it. 00:30:38.574 [2024-11-20 15:40:27.404275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.574 [2024-11-20 15:40:27.404304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.574 qpair failed and we were unable to recover it. 00:30:38.574 [2024-11-20 15:40:27.404694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.574 [2024-11-20 15:40:27.404722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.574 qpair failed and we were unable to recover it. 00:30:38.574 [2024-11-20 15:40:27.404979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.574 [2024-11-20 15:40:27.405007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.574 qpair failed and we were unable to recover it. 00:30:38.574 [2024-11-20 15:40:27.405235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.574 [2024-11-20 15:40:27.405270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.574 qpair failed and we were unable to recover it. 00:30:38.574 [2024-11-20 15:40:27.405643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.574 [2024-11-20 15:40:27.405672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.574 qpair failed and we were unable to recover it. 00:30:38.574 [2024-11-20 15:40:27.405962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.574 [2024-11-20 15:40:27.405991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.574 qpair failed and we were unable to recover it. 00:30:38.574 [2024-11-20 15:40:27.406361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.574 [2024-11-20 15:40:27.406392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.574 qpair failed and we were unable to recover it. 00:30:38.574 [2024-11-20 15:40:27.406751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.574 [2024-11-20 15:40:27.406779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.574 qpair failed and we were unable to recover it. 
00:30:38.574 [2024-11-20 15:40:27.407142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.574 [2024-11-20 15:40:27.407181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.574 qpair failed and we were unable to recover it. 00:30:38.574 [2024-11-20 15:40:27.407571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.574 [2024-11-20 15:40:27.407599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.574 qpair failed and we were unable to recover it. 00:30:38.574 [2024-11-20 15:40:27.407961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.575 [2024-11-20 15:40:27.407989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.575 qpair failed and we were unable to recover it. 00:30:38.575 [2024-11-20 15:40:27.408386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.575 [2024-11-20 15:40:27.408416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.575 qpair failed and we were unable to recover it. 00:30:38.575 [2024-11-20 15:40:27.408798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.575 [2024-11-20 15:40:27.408826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.575 qpair failed and we were unable to recover it. 00:30:38.575 [2024-11-20 15:40:27.409186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.575 [2024-11-20 15:40:27.409215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.575 qpair failed and we were unable to recover it. 00:30:38.575 [2024-11-20 15:40:27.409581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.575 [2024-11-20 15:40:27.409611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.575 qpair failed and we were unable to recover it. 00:30:38.575 [2024-11-20 15:40:27.409966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.575 [2024-11-20 15:40:27.409994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.575 qpair failed and we were unable to recover it. 00:30:38.575 [2024-11-20 15:40:27.410413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.575 [2024-11-20 15:40:27.410444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.575 qpair failed and we were unable to recover it. 00:30:38.575 [2024-11-20 15:40:27.410833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.575 [2024-11-20 15:40:27.410862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.575 qpair failed and we were unable to recover it. 
00:30:38.575 [2024-11-20 15:40:27.411221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.575 [2024-11-20 15:40:27.411251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.575 qpair failed and we were unable to recover it. 00:30:38.575 [2024-11-20 15:40:27.411637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.575 [2024-11-20 15:40:27.411666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.575 qpair failed and we were unable to recover it. 00:30:38.575 [2024-11-20 15:40:27.412020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.575 [2024-11-20 15:40:27.412050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.575 qpair failed and we were unable to recover it. 00:30:38.575 [2024-11-20 15:40:27.412411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.575 [2024-11-20 15:40:27.412440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.575 qpair failed and we were unable to recover it. 00:30:38.575 [2024-11-20 15:40:27.412817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.575 [2024-11-20 15:40:27.412845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.575 qpair failed and we were unable to recover it. 00:30:38.575 [2024-11-20 15:40:27.413111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.575 [2024-11-20 15:40:27.413139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.575 qpair failed and we were unable to recover it. 00:30:38.575 [2024-11-20 15:40:27.413517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.575 [2024-11-20 15:40:27.413546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.575 qpair failed and we were unable to recover it. 00:30:38.575 [2024-11-20 15:40:27.413904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.575 [2024-11-20 15:40:27.413933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.575 qpair failed and we were unable to recover it. 00:30:38.575 [2024-11-20 15:40:27.414299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.575 [2024-11-20 15:40:27.414329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.575 qpair failed and we were unable to recover it. 00:30:38.575 [2024-11-20 15:40:27.414605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.575 [2024-11-20 15:40:27.414633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.575 qpair failed and we were unable to recover it. 
00:30:38.575 [2024-11-20 15:40:27.414981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.575 [2024-11-20 15:40:27.415011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.575 qpair failed and we were unable to recover it. 00:30:38.575 [2024-11-20 15:40:27.415389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.575 [2024-11-20 15:40:27.415419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.575 qpair failed and we were unable to recover it. 00:30:38.575 [2024-11-20 15:40:27.415799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.575 [2024-11-20 15:40:27.415829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.575 qpair failed and we were unable to recover it. 00:30:38.575 [2024-11-20 15:40:27.416189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.575 [2024-11-20 15:40:27.416219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.575 qpair failed and we were unable to recover it. 00:30:38.575 [2024-11-20 15:40:27.416577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.575 [2024-11-20 15:40:27.416606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.575 qpair failed and we were unable to recover it. 00:30:38.575 [2024-11-20 15:40:27.416988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.575 [2024-11-20 15:40:27.417016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.575 qpair failed and we were unable to recover it. 00:30:38.575 [2024-11-20 15:40:27.417371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.575 [2024-11-20 15:40:27.417399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.575 qpair failed and we were unable to recover it. 00:30:38.575 [2024-11-20 15:40:27.417645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.575 [2024-11-20 15:40:27.417677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.575 qpair failed and we were unable to recover it. 00:30:38.575 [2024-11-20 15:40:27.418071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.575 [2024-11-20 15:40:27.418100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.575 qpair failed and we were unable to recover it. 00:30:38.575 [2024-11-20 15:40:27.418669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.575 [2024-11-20 15:40:27.418700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.575 qpair failed and we were unable to recover it. 
00:30:38.575 [2024-11-20 15:40:27.419047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.575 [2024-11-20 15:40:27.419076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.575 qpair failed and we were unable to recover it. 00:30:38.575 [2024-11-20 15:40:27.419330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.575 [2024-11-20 15:40:27.419360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.575 qpair failed and we were unable to recover it. 00:30:38.575 [2024-11-20 15:40:27.419785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.575 [2024-11-20 15:40:27.419812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.575 qpair failed and we were unable to recover it. 00:30:38.575 [2024-11-20 15:40:27.420051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.575 [2024-11-20 15:40:27.420080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.575 qpair failed and we were unable to recover it. 00:30:38.575 [2024-11-20 15:40:27.420432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.575 [2024-11-20 15:40:27.420462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.575 qpair failed and we were unable to recover it. 00:30:38.575 [2024-11-20 15:40:27.420824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.575 [2024-11-20 15:40:27.420860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.575 qpair failed and we were unable to recover it. 00:30:38.575 [2024-11-20 15:40:27.421239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.575 [2024-11-20 15:40:27.421269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.575 qpair failed and we were unable to recover it. 00:30:38.575 [2024-11-20 15:40:27.421629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.575 [2024-11-20 15:40:27.421658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.575 qpair failed and we were unable to recover it. 00:30:38.575 [2024-11-20 15:40:27.422026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.575 [2024-11-20 15:40:27.422055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.575 qpair failed and we were unable to recover it. 00:30:38.575 [2024-11-20 15:40:27.422403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.575 [2024-11-20 15:40:27.422432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.576 qpair failed and we were unable to recover it. 
00:30:38.576 [2024-11-20 15:40:27.422799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.576 [2024-11-20 15:40:27.422827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.576 qpair failed and we were unable to recover it. 00:30:38.576 [2024-11-20 15:40:27.423193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.576 [2024-11-20 15:40:27.423224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.576 qpair failed and we were unable to recover it. 00:30:38.576 [2024-11-20 15:40:27.423587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.576 [2024-11-20 15:40:27.423615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.576 qpair failed and we were unable to recover it. 00:30:38.576 [2024-11-20 15:40:27.424020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.576 [2024-11-20 15:40:27.424048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.576 qpair failed and we were unable to recover it. 00:30:38.576 [2024-11-20 15:40:27.424414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.576 [2024-11-20 15:40:27.424444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.576 qpair failed and we were unable to recover it. 00:30:38.576 [2024-11-20 15:40:27.424721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.576 [2024-11-20 15:40:27.424749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.576 qpair failed and we were unable to recover it. 00:30:38.576 [2024-11-20 15:40:27.425125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.576 [2024-11-20 15:40:27.425153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.576 qpair failed and we were unable to recover it. 00:30:38.576 [2024-11-20 15:40:27.425420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.576 [2024-11-20 15:40:27.425449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.576 qpair failed and we were unable to recover it. 00:30:38.576 [2024-11-20 15:40:27.425793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.576 [2024-11-20 15:40:27.425821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.576 qpair failed and we were unable to recover it. 00:30:38.576 [2024-11-20 15:40:27.426122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.576 [2024-11-20 15:40:27.426172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.576 qpair failed and we were unable to recover it. 
00:30:38.576 [2024-11-20 15:40:27.426577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.576 [2024-11-20 15:40:27.426605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.576 qpair failed and we were unable to recover it. 00:30:38.576 [2024-11-20 15:40:27.426959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.576 [2024-11-20 15:40:27.426987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.576 qpair failed and we were unable to recover it. 00:30:38.576 [2024-11-20 15:40:27.427348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.576 [2024-11-20 15:40:27.427379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.576 qpair failed and we were unable to recover it. 00:30:38.576 [2024-11-20 15:40:27.427748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.576 [2024-11-20 15:40:27.427777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.576 qpair failed and we were unable to recover it. 00:30:38.576 [2024-11-20 15:40:27.428165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.576 [2024-11-20 15:40:27.428194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.576 qpair failed and we were unable to recover it. 00:30:38.576 [2024-11-20 15:40:27.428548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.576 [2024-11-20 15:40:27.428578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.576 qpair failed and we were unable to recover it. 00:30:38.576 [2024-11-20 15:40:27.428831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.576 [2024-11-20 15:40:27.428860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.576 qpair failed and we were unable to recover it. 00:30:38.576 [2024-11-20 15:40:27.429215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.576 [2024-11-20 15:40:27.429244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.576 qpair failed and we were unable to recover it. 00:30:38.576 [2024-11-20 15:40:27.429608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.576 [2024-11-20 15:40:27.429636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.576 qpair failed and we were unable to recover it. 00:30:38.576 [2024-11-20 15:40:27.430019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.576 [2024-11-20 15:40:27.430047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.576 qpair failed and we were unable to recover it. 
00:30:38.576 [2024-11-20 15:40:27.430415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.576 [2024-11-20 15:40:27.430445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.576 qpair failed and we were unable to recover it. 00:30:38.576 [2024-11-20 15:40:27.430801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.576 [2024-11-20 15:40:27.430831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.576 qpair failed and we were unable to recover it. 00:30:38.576 [2024-11-20 15:40:27.431224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.576 [2024-11-20 15:40:27.431255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.576 qpair failed and we were unable to recover it. 00:30:38.576 [2024-11-20 15:40:27.431594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.576 [2024-11-20 15:40:27.431623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.576 qpair failed and we were unable to recover it. 00:30:38.576 [2024-11-20 15:40:27.431966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.576 [2024-11-20 15:40:27.431994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.576 qpair failed and we were unable to recover it. 00:30:38.576 [2024-11-20 15:40:27.432374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.576 [2024-11-20 15:40:27.432404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.576 qpair failed and we were unable to recover it. 00:30:38.576 [2024-11-20 15:40:27.432764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.576 [2024-11-20 15:40:27.432793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.576 qpair failed and we were unable to recover it. 00:30:38.576 [2024-11-20 15:40:27.433177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.576 [2024-11-20 15:40:27.433207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.576 qpair failed and we were unable to recover it. 00:30:38.576 [2024-11-20 15:40:27.433590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.576 [2024-11-20 15:40:27.433618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.576 qpair failed and we were unable to recover it. 00:30:38.576 [2024-11-20 15:40:27.433868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.576 [2024-11-20 15:40:27.433897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.576 qpair failed and we were unable to recover it. 
00:30:38.576 [2024-11-20 15:40:27.434172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.576 [2024-11-20 15:40:27.434201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.576 qpair failed and we were unable to recover it. 00:30:38.576 [2024-11-20 15:40:27.434551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.576 [2024-11-20 15:40:27.434580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.576 qpair failed and we were unable to recover it. 00:30:38.576 [2024-11-20 15:40:27.434957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.576 [2024-11-20 15:40:27.434987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.576 qpair failed and we were unable to recover it. 00:30:38.576 [2024-11-20 15:40:27.435332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.576 [2024-11-20 15:40:27.435362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.576 qpair failed and we were unable to recover it. 00:30:38.577 [2024-11-20 15:40:27.435726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.577 [2024-11-20 15:40:27.435754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.577 qpair failed and we were unable to recover it. 00:30:38.577 [2024-11-20 15:40:27.436122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.577 [2024-11-20 15:40:27.436169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.577 qpair failed and we were unable to recover it. 00:30:38.577 [2024-11-20 15:40:27.436520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.577 [2024-11-20 15:40:27.436549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.577 qpair failed and we were unable to recover it. 00:30:38.577 [2024-11-20 15:40:27.436911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.577 [2024-11-20 15:40:27.436939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.577 qpair failed and we were unable to recover it. 00:30:38.577 [2024-11-20 15:40:27.437330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.577 [2024-11-20 15:40:27.437360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.577 qpair failed and we were unable to recover it. 00:30:38.577 [2024-11-20 15:40:27.437740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.577 [2024-11-20 15:40:27.437768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.577 qpair failed and we were unable to recover it. 
00:30:38.577 [2024-11-20 15:40:27.438129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.577 [2024-11-20 15:40:27.438176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.577 qpair failed and we were unable to recover it. 00:30:38.577 [2024-11-20 15:40:27.438407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.577 [2024-11-20 15:40:27.438437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.577 qpair failed and we were unable to recover it. 00:30:38.577 [2024-11-20 15:40:27.438812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.577 [2024-11-20 15:40:27.438841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.577 qpair failed and we were unable to recover it. 00:30:38.577 [2024-11-20 15:40:27.439206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.577 [2024-11-20 15:40:27.439236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.577 qpair failed and we were unable to recover it. 00:30:38.577 [2024-11-20 15:40:27.439601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.577 [2024-11-20 15:40:27.439629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.577 qpair failed and we were unable to recover it. 00:30:38.577 [2024-11-20 15:40:27.439892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.577 [2024-11-20 15:40:27.439921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.577 qpair failed and we were unable to recover it. 00:30:38.577 [2024-11-20 15:40:27.440297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.577 [2024-11-20 15:40:27.440326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.577 qpair failed and we were unable to recover it. 00:30:38.577 [2024-11-20 15:40:27.440695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.577 [2024-11-20 15:40:27.440723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.577 qpair failed and we were unable to recover it. 00:30:38.577 [2024-11-20 15:40:27.441006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.577 [2024-11-20 15:40:27.441035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.577 qpair failed and we were unable to recover it. 00:30:38.577 [2024-11-20 15:40:27.441279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.577 [2024-11-20 15:40:27.441309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.577 qpair failed and we were unable to recover it. 
00:30:38.577 [2024-11-20 15:40:27.441668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.577 [2024-11-20 15:40:27.441697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.577 qpair failed and we were unable to recover it. 00:30:38.577 [2024-11-20 15:40:27.441958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.577 [2024-11-20 15:40:27.441987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.577 qpair failed and we were unable to recover it. 00:30:38.577 [2024-11-20 15:40:27.442345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.577 [2024-11-20 15:40:27.442375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.577 qpair failed and we were unable to recover it. 00:30:38.577 [2024-11-20 15:40:27.442626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.577 [2024-11-20 15:40:27.442654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.577 qpair failed and we were unable to recover it. 00:30:38.577 [2024-11-20 15:40:27.442952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.577 [2024-11-20 15:40:27.442981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.577 qpair failed and we were unable to recover it. 00:30:38.577 [2024-11-20 15:40:27.443327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.577 [2024-11-20 15:40:27.443359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.577 qpair failed and we were unable to recover it. 00:30:38.577 [2024-11-20 15:40:27.443705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.577 [2024-11-20 15:40:27.443734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.577 qpair failed and we were unable to recover it. 00:30:38.577 [2024-11-20 15:40:27.444098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.577 [2024-11-20 15:40:27.444128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.577 qpair failed and we were unable to recover it. 00:30:38.577 [2024-11-20 15:40:27.444335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.577 [2024-11-20 15:40:27.444365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.577 qpair failed and we were unable to recover it. 00:30:38.577 [2024-11-20 15:40:27.444712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.577 [2024-11-20 15:40:27.444742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.577 qpair failed and we were unable to recover it. 
00:30:38.577 [2024-11-20 15:40:27.445112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.577 [2024-11-20 15:40:27.445141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.577 qpair failed and we were unable to recover it. 00:30:38.577 [2024-11-20 15:40:27.445521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.577 [2024-11-20 15:40:27.445549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.577 qpair failed and we were unable to recover it. 00:30:38.577 [2024-11-20 15:40:27.445905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.577 [2024-11-20 15:40:27.445940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.577 qpair failed and we were unable to recover it. 00:30:38.577 [2024-11-20 15:40:27.446322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.577 [2024-11-20 15:40:27.446352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.577 qpair failed and we were unable to recover it. 00:30:38.577 [2024-11-20 15:40:27.446714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.577 [2024-11-20 15:40:27.446743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.577 qpair failed and we were unable to recover it. 00:30:38.577 [2024-11-20 15:40:27.446989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.577 [2024-11-20 15:40:27.447019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.577 qpair failed and we were unable to recover it. 00:30:38.577 [2024-11-20 15:40:27.447356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.577 [2024-11-20 15:40:27.447387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.577 qpair failed and we were unable to recover it. 00:30:38.577 [2024-11-20 15:40:27.447738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.577 [2024-11-20 15:40:27.447767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.577 qpair failed and we were unable to recover it. 00:30:38.577 [2024-11-20 15:40:27.448228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.577 [2024-11-20 15:40:27.448258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.577 qpair failed and we were unable to recover it. 00:30:38.577 [2024-11-20 15:40:27.448618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.577 [2024-11-20 15:40:27.448646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.577 qpair failed and we were unable to recover it. 
00:30:38.577 [2024-11-20 15:40:27.448995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.577 [2024-11-20 15:40:27.449023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.577 qpair failed and we were unable to recover it. 00:30:38.577 [2024-11-20 15:40:27.449288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.578 [2024-11-20 15:40:27.449318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.578 qpair failed and we were unable to recover it. 00:30:38.578 [2024-11-20 15:40:27.449682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.578 [2024-11-20 15:40:27.449712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.578 qpair failed and we were unable to recover it. 00:30:38.578 [2024-11-20 15:40:27.450083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.578 [2024-11-20 15:40:27.450111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.578 qpair failed and we were unable to recover it. 00:30:38.578 [2024-11-20 15:40:27.450471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.578 [2024-11-20 15:40:27.450502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.578 qpair failed and we were unable to recover it. 00:30:38.578 [2024-11-20 15:40:27.450748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.578 [2024-11-20 15:40:27.450777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.578 qpair failed and we were unable to recover it. 00:30:38.578 [2024-11-20 15:40:27.451172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.578 [2024-11-20 15:40:27.451202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.578 qpair failed and we were unable to recover it. 00:30:38.578 [2024-11-20 15:40:27.451621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.578 [2024-11-20 15:40:27.451650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.578 qpair failed and we were unable to recover it. 00:30:38.578 [2024-11-20 15:40:27.451883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.578 [2024-11-20 15:40:27.451911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.578 qpair failed and we were unable to recover it. 00:30:38.578 [2024-11-20 15:40:27.452254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.578 [2024-11-20 15:40:27.452285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.578 qpair failed and we were unable to recover it. 
00:30:38.578 [2024-11-20 15:40:27.452667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:38.578 [2024-11-20 15:40:27.452696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:38.578 qpair failed and we were unable to recover it.
00:30:38.855 [... the three messages above repeat verbatim for each reconnect attempt, with only the timestamps advancing, from [2024-11-20 15:40:27.452667] through [2024-11-20 15:40:27.531908] (elapsed 00:30:38.578-00:30:38.855) ...]
00:30:38.855 [2024-11-20 15:40:27.532259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.855 [2024-11-20 15:40:27.532290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.855 qpair failed and we were unable to recover it. 00:30:38.855 [2024-11-20 15:40:27.532641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.855 [2024-11-20 15:40:27.532671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.855 qpair failed and we were unable to recover it. 00:30:38.855 [2024-11-20 15:40:27.533031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.855 [2024-11-20 15:40:27.533060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.855 qpair failed and we were unable to recover it. 00:30:38.855 [2024-11-20 15:40:27.533407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.855 [2024-11-20 15:40:27.533437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.855 qpair failed and we were unable to recover it. 00:30:38.855 [2024-11-20 15:40:27.533819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.855 [2024-11-20 15:40:27.533847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.855 qpair failed and we were unable to recover it. 00:30:38.855 [2024-11-20 15:40:27.534100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.855 [2024-11-20 15:40:27.534129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.855 qpair failed and we were unable to recover it. 00:30:38.855 [2024-11-20 15:40:27.534532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.855 [2024-11-20 15:40:27.534562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.855 qpair failed and we were unable to recover it. 00:30:38.855 [2024-11-20 15:40:27.534927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.855 [2024-11-20 15:40:27.534955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.855 qpair failed and we were unable to recover it. 00:30:38.855 [2024-11-20 15:40:27.535293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.855 [2024-11-20 15:40:27.535322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.855 qpair failed and we were unable to recover it. 00:30:38.855 [2024-11-20 15:40:27.535682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.855 [2024-11-20 15:40:27.535711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.855 qpair failed and we were unable to recover it. 
00:30:38.855 [2024-11-20 15:40:27.536151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.855 [2024-11-20 15:40:27.536197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.855 qpair failed and we were unable to recover it. 00:30:38.855 [2024-11-20 15:40:27.536558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.855 [2024-11-20 15:40:27.536586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.855 qpair failed and we were unable to recover it. 00:30:38.855 [2024-11-20 15:40:27.536936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.855 [2024-11-20 15:40:27.536972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.855 qpair failed and we were unable to recover it. 00:30:38.855 [2024-11-20 15:40:27.537309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.855 [2024-11-20 15:40:27.537339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.855 qpair failed and we were unable to recover it. 00:30:38.855 [2024-11-20 15:40:27.537699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.855 [2024-11-20 15:40:27.537727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.855 qpair failed and we were unable to recover it. 00:30:38.855 [2024-11-20 15:40:27.538084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.855 [2024-11-20 15:40:27.538113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.855 qpair failed and we were unable to recover it. 00:30:38.855 [2024-11-20 15:40:27.538523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.855 [2024-11-20 15:40:27.538552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.855 qpair failed and we were unable to recover it. 00:30:38.855 [2024-11-20 15:40:27.538906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.856 [2024-11-20 15:40:27.538936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.856 qpair failed and we were unable to recover it. 00:30:38.856 [2024-11-20 15:40:27.539213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.856 [2024-11-20 15:40:27.539243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.856 qpair failed and we were unable to recover it. 00:30:38.856 [2024-11-20 15:40:27.539585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.856 [2024-11-20 15:40:27.539613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.856 qpair failed and we were unable to recover it. 
00:30:38.856 [2024-11-20 15:40:27.539992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.856 [2024-11-20 15:40:27.540020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.856 qpair failed and we were unable to recover it. 00:30:38.856 [2024-11-20 15:40:27.540399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.856 [2024-11-20 15:40:27.540429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.856 qpair failed and we were unable to recover it. 00:30:38.856 [2024-11-20 15:40:27.540672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.856 [2024-11-20 15:40:27.540703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.856 qpair failed and we were unable to recover it. 00:30:38.856 [2024-11-20 15:40:27.541072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.856 [2024-11-20 15:40:27.541100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.856 qpair failed and we were unable to recover it. 00:30:38.856 [2024-11-20 15:40:27.541528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.856 [2024-11-20 15:40:27.541559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.856 qpair failed and we were unable to recover it. 00:30:38.856 [2024-11-20 15:40:27.541909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.856 [2024-11-20 15:40:27.541937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.856 qpair failed and we were unable to recover it. 00:30:38.856 [2024-11-20 15:40:27.542300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.856 [2024-11-20 15:40:27.542330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.856 qpair failed and we were unable to recover it. 00:30:38.856 [2024-11-20 15:40:27.542669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.856 [2024-11-20 15:40:27.542698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.856 qpair failed and we were unable to recover it. 00:30:38.856 [2024-11-20 15:40:27.543061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.856 [2024-11-20 15:40:27.543089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.856 qpair failed and we were unable to recover it. 00:30:38.856 [2024-11-20 15:40:27.543535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.856 [2024-11-20 15:40:27.543565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.856 qpair failed and we were unable to recover it. 
00:30:38.856 [2024-11-20 15:40:27.543932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.856 [2024-11-20 15:40:27.543960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.856 qpair failed and we were unable to recover it. 00:30:38.856 [2024-11-20 15:40:27.544307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.856 [2024-11-20 15:40:27.544337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.856 qpair failed and we were unable to recover it. 00:30:38.856 [2024-11-20 15:40:27.544708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.856 [2024-11-20 15:40:27.544736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.856 qpair failed and we were unable to recover it. 00:30:38.856 [2024-11-20 15:40:27.545187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.856 [2024-11-20 15:40:27.545218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.856 qpair failed and we were unable to recover it. 00:30:38.856 [2024-11-20 15:40:27.545571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.856 [2024-11-20 15:40:27.545600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.856 qpair failed and we were unable to recover it. 00:30:38.856 [2024-11-20 15:40:27.545974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.856 [2024-11-20 15:40:27.546002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.856 qpair failed and we were unable to recover it. 00:30:38.856 [2024-11-20 15:40:27.546250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.856 [2024-11-20 15:40:27.546280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.856 qpair failed and we were unable to recover it. 00:30:38.856 [2024-11-20 15:40:27.546661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.856 [2024-11-20 15:40:27.546690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.856 qpair failed and we were unable to recover it. 00:30:38.856 [2024-11-20 15:40:27.547054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.856 [2024-11-20 15:40:27.547084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.856 qpair failed and we were unable to recover it. 00:30:38.856 [2024-11-20 15:40:27.547471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.856 [2024-11-20 15:40:27.547501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.856 qpair failed and we were unable to recover it. 
00:30:38.856 [2024-11-20 15:40:27.547861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.856 [2024-11-20 15:40:27.547890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.856 qpair failed and we were unable to recover it. 00:30:38.856 [2024-11-20 15:40:27.548248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.856 [2024-11-20 15:40:27.548277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.856 qpair failed and we were unable to recover it. 00:30:38.856 [2024-11-20 15:40:27.548627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.856 [2024-11-20 15:40:27.548657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.856 qpair failed and we were unable to recover it. 00:30:38.856 [2024-11-20 15:40:27.549024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.856 [2024-11-20 15:40:27.549052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.856 qpair failed and we were unable to recover it. 00:30:38.856 [2024-11-20 15:40:27.549395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.856 [2024-11-20 15:40:27.549426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.856 qpair failed and we were unable to recover it. 00:30:38.856 [2024-11-20 15:40:27.549872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.856 [2024-11-20 15:40:27.549900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.856 qpair failed and we were unable to recover it. 00:30:38.856 [2024-11-20 15:40:27.550149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.856 [2024-11-20 15:40:27.550187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.856 qpair failed and we were unable to recover it. 00:30:38.856 [2024-11-20 15:40:27.550538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.856 [2024-11-20 15:40:27.550567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.856 qpair failed and we were unable to recover it. 00:30:38.856 [2024-11-20 15:40:27.550916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.856 [2024-11-20 15:40:27.550945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.856 qpair failed and we were unable to recover it. 00:30:38.856 [2024-11-20 15:40:27.551306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.856 [2024-11-20 15:40:27.551334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.856 qpair failed and we were unable to recover it. 
00:30:38.856 [2024-11-20 15:40:27.551678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.856 [2024-11-20 15:40:27.551706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.856 qpair failed and we were unable to recover it. 00:30:38.856 [2024-11-20 15:40:27.552066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.856 [2024-11-20 15:40:27.552094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.856 qpair failed and we were unable to recover it. 00:30:38.856 [2024-11-20 15:40:27.552488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.856 [2024-11-20 15:40:27.552524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.856 qpair failed and we were unable to recover it. 00:30:38.856 [2024-11-20 15:40:27.552882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.856 [2024-11-20 15:40:27.552910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.856 qpair failed and we were unable to recover it. 00:30:38.856 [2024-11-20 15:40:27.553356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.856 [2024-11-20 15:40:27.553385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.857 qpair failed and we were unable to recover it. 00:30:38.857 [2024-11-20 15:40:27.553756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.857 [2024-11-20 15:40:27.553784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.857 qpair failed and we were unable to recover it. 00:30:38.857 [2024-11-20 15:40:27.554142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.857 [2024-11-20 15:40:27.554178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.857 qpair failed and we were unable to recover it. 00:30:38.857 [2024-11-20 15:40:27.554540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.857 [2024-11-20 15:40:27.554568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.857 qpair failed and we were unable to recover it. 00:30:38.857 [2024-11-20 15:40:27.554927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.857 [2024-11-20 15:40:27.554957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.857 qpair failed and we were unable to recover it. 00:30:38.857 [2024-11-20 15:40:27.555306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.857 [2024-11-20 15:40:27.555335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.857 qpair failed and we were unable to recover it. 
00:30:38.857 [2024-11-20 15:40:27.555698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.857 [2024-11-20 15:40:27.555726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.857 qpair failed and we were unable to recover it. 00:30:38.857 [2024-11-20 15:40:27.556099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.857 [2024-11-20 15:40:27.556128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.857 qpair failed and we were unable to recover it. 00:30:38.857 [2024-11-20 15:40:27.556500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.857 [2024-11-20 15:40:27.556530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.857 qpair failed and we were unable to recover it. 00:30:38.857 [2024-11-20 15:40:27.556883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.857 [2024-11-20 15:40:27.556914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.857 qpair failed and we were unable to recover it. 00:30:38.857 [2024-11-20 15:40:27.557174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.857 [2024-11-20 15:40:27.557205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.857 qpair failed and we were unable to recover it. 00:30:38.857 [2024-11-20 15:40:27.557593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.857 [2024-11-20 15:40:27.557622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.857 qpair failed and we were unable to recover it. 00:30:38.857 [2024-11-20 15:40:27.557986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.857 [2024-11-20 15:40:27.558014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.857 qpair failed and we were unable to recover it. 00:30:38.857 [2024-11-20 15:40:27.558425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.857 [2024-11-20 15:40:27.558455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.857 qpair failed and we were unable to recover it. 00:30:38.857 [2024-11-20 15:40:27.558816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.857 [2024-11-20 15:40:27.558844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.857 qpair failed and we were unable to recover it. 00:30:38.857 [2024-11-20 15:40:27.559195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.857 [2024-11-20 15:40:27.559226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.857 qpair failed and we were unable to recover it. 
00:30:38.857 [2024-11-20 15:40:27.559607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.857 [2024-11-20 15:40:27.559635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.857 qpair failed and we were unable to recover it. 00:30:38.857 [2024-11-20 15:40:27.559994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.857 [2024-11-20 15:40:27.560022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.857 qpair failed and we were unable to recover it. 00:30:38.857 [2024-11-20 15:40:27.560399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.857 [2024-11-20 15:40:27.560429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.857 qpair failed and we were unable to recover it. 00:30:38.857 [2024-11-20 15:40:27.560792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.857 [2024-11-20 15:40:27.560820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.857 qpair failed and we were unable to recover it. 00:30:38.857 [2024-11-20 15:40:27.561194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.857 [2024-11-20 15:40:27.561223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.857 qpair failed and we were unable to recover it. 00:30:38.857 [2024-11-20 15:40:27.561587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.857 [2024-11-20 15:40:27.561615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.857 qpair failed and we were unable to recover it. 00:30:38.857 [2024-11-20 15:40:27.561987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.857 [2024-11-20 15:40:27.562015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.857 qpair failed and we were unable to recover it. 00:30:38.857 [2024-11-20 15:40:27.562173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.857 [2024-11-20 15:40:27.562204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.857 qpair failed and we were unable to recover it. 00:30:38.857 [2024-11-20 15:40:27.562597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.857 [2024-11-20 15:40:27.562626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.857 qpair failed and we were unable to recover it. 00:30:38.857 [2024-11-20 15:40:27.562998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.857 [2024-11-20 15:40:27.563028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.857 qpair failed and we were unable to recover it. 
00:30:38.857 [2024-11-20 15:40:27.563394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.857 [2024-11-20 15:40:27.563423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.857 qpair failed and we were unable to recover it. 00:30:38.857 [2024-11-20 15:40:27.563778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.857 [2024-11-20 15:40:27.563807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.857 qpair failed and we were unable to recover it. 00:30:38.857 [2024-11-20 15:40:27.564179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.857 [2024-11-20 15:40:27.564210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.857 qpair failed and we were unable to recover it. 00:30:38.857 [2024-11-20 15:40:27.564563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.857 [2024-11-20 15:40:27.564591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.857 qpair failed and we were unable to recover it. 00:30:38.857 [2024-11-20 15:40:27.564954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.857 [2024-11-20 15:40:27.564983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.857 qpair failed and we were unable to recover it. 00:30:38.857 [2024-11-20 15:40:27.565345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.857 [2024-11-20 15:40:27.565374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.857 qpair failed and we were unable to recover it. 00:30:38.857 [2024-11-20 15:40:27.565739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.857 [2024-11-20 15:40:27.565767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.857 qpair failed and we were unable to recover it. 00:30:38.857 [2024-11-20 15:40:27.566131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.857 [2024-11-20 15:40:27.566166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.857 qpair failed and we were unable to recover it. 00:30:38.857 [2024-11-20 15:40:27.566414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.857 [2024-11-20 15:40:27.566446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.857 qpair failed and we were unable to recover it. 00:30:38.857 [2024-11-20 15:40:27.566811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.857 [2024-11-20 15:40:27.566840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.857 qpair failed and we were unable to recover it. 
00:30:38.857 [2024-11-20 15:40:27.567207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.857 [2024-11-20 15:40:27.567237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.857 qpair failed and we were unable to recover it. 00:30:38.857 [2024-11-20 15:40:27.567500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.857 [2024-11-20 15:40:27.567528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.857 qpair failed and we were unable to recover it. 00:30:38.857 [2024-11-20 15:40:27.567915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.857 [2024-11-20 15:40:27.567949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.858 qpair failed and we were unable to recover it. 00:30:38.858 [2024-11-20 15:40:27.568307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.858 [2024-11-20 15:40:27.568337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.858 qpair failed and we were unable to recover it. 00:30:38.858 [2024-11-20 15:40:27.568692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.858 [2024-11-20 15:40:27.568720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.858 qpair failed and we were unable to recover it. 00:30:38.858 [2024-11-20 15:40:27.569061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.858 [2024-11-20 15:40:27.569091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.858 qpair failed and we were unable to recover it. 00:30:38.858 [2024-11-20 15:40:27.569447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.858 [2024-11-20 15:40:27.569478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.858 qpair failed and we were unable to recover it. 00:30:38.858 [2024-11-20 15:40:27.569845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.858 [2024-11-20 15:40:27.569873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.858 qpair failed and we were unable to recover it. 00:30:38.858 [2024-11-20 15:40:27.570239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.858 [2024-11-20 15:40:27.570268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.858 qpair failed and we were unable to recover it. 00:30:38.858 [2024-11-20 15:40:27.570630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.858 [2024-11-20 15:40:27.570658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.858 qpair failed and we were unable to recover it. 
00:30:38.858 [2024-11-20 15:40:27.571024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.858 [2024-11-20 15:40:27.571052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.858 qpair failed and we were unable to recover it. 00:30:38.858 [2024-11-20 15:40:27.571303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.858 [2024-11-20 15:40:27.571332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.858 qpair failed and we were unable to recover it. 00:30:38.858 [2024-11-20 15:40:27.571697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.858 [2024-11-20 15:40:27.571726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.858 qpair failed and we were unable to recover it. 00:30:38.858 [2024-11-20 15:40:27.572091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.858 [2024-11-20 15:40:27.572120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.858 qpair failed and we were unable to recover it. 00:30:38.858 [2024-11-20 15:40:27.572472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.858 [2024-11-20 15:40:27.572502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.858 qpair failed and we were unable to recover it. 00:30:38.858 [2024-11-20 15:40:27.572862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.858 [2024-11-20 15:40:27.572892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.858 qpair failed and we were unable to recover it. 00:30:38.858 [2024-11-20 15:40:27.573251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.858 [2024-11-20 15:40:27.573280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.858 qpair failed and we were unable to recover it. 00:30:38.858 [2024-11-20 15:40:27.573637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.858 [2024-11-20 15:40:27.573666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.858 qpair failed and we were unable to recover it. 00:30:38.858 [2024-11-20 15:40:27.574001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.858 [2024-11-20 15:40:27.574029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.858 qpair failed and we were unable to recover it. 00:30:38.858 [2024-11-20 15:40:27.574384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.858 [2024-11-20 15:40:27.574413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.858 qpair failed and we were unable to recover it. 
00:30:38.858 [2024-11-20 15:40:27.574791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.858 [2024-11-20 15:40:27.574819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.858 qpair failed and we were unable to recover it. 00:30:38.858 [2024-11-20 15:40:27.575169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.858 [2024-11-20 15:40:27.575198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.858 qpair failed and we were unable to recover it. 00:30:38.858 [2024-11-20 15:40:27.575538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.858 [2024-11-20 15:40:27.575567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.858 qpair failed and we were unable to recover it. 00:30:38.858 [2024-11-20 15:40:27.575954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.858 [2024-11-20 15:40:27.575983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.858 qpair failed and we were unable to recover it. 00:30:38.858 [2024-11-20 15:40:27.576342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.858 [2024-11-20 15:40:27.576372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.858 qpair failed and we were unable to recover it. 00:30:38.858 [2024-11-20 15:40:27.576756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.858 [2024-11-20 15:40:27.576784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.858 qpair failed and we were unable to recover it. 00:30:38.858 [2024-11-20 15:40:27.577148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.858 [2024-11-20 15:40:27.577185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.858 qpair failed and we were unable to recover it. 00:30:38.858 [2024-11-20 15:40:27.577555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.858 [2024-11-20 15:40:27.577583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.858 qpair failed and we were unable to recover it. 00:30:38.858 [2024-11-20 15:40:27.577946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.858 [2024-11-20 15:40:27.577974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.858 qpair failed and we were unable to recover it. 00:30:38.858 [2024-11-20 15:40:27.578251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.858 [2024-11-20 15:40:27.578281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.858 qpair failed and we were unable to recover it. 
00:30:38.858 [2024-11-20 15:40:27.578659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.858 [2024-11-20 15:40:27.578687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.858 qpair failed and we were unable to recover it. 00:30:38.858 [2024-11-20 15:40:27.579122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.858 [2024-11-20 15:40:27.579150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.858 qpair failed and we were unable to recover it. 00:30:38.858 [2024-11-20 15:40:27.579524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.858 [2024-11-20 15:40:27.579552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.858 qpair failed and we were unable to recover it. 00:30:38.858 [2024-11-20 15:40:27.579913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.858 [2024-11-20 15:40:27.579941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.858 qpair failed and we were unable to recover it. 00:30:38.858 [2024-11-20 15:40:27.580300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.858 [2024-11-20 15:40:27.580331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.858 qpair failed and we were unable to recover it. 00:30:38.859 [2024-11-20 15:40:27.580698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.859 [2024-11-20 15:40:27.580726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.859 qpair failed and we were unable to recover it. 00:30:38.859 [2024-11-20 15:40:27.581089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.859 [2024-11-20 15:40:27.581116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.859 qpair failed and we were unable to recover it. 00:30:38.859 [2024-11-20 15:40:27.581473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.859 [2024-11-20 15:40:27.581503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.859 qpair failed and we were unable to recover it. 00:30:38.859 [2024-11-20 15:40:27.581753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.859 [2024-11-20 15:40:27.581784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.859 qpair failed and we were unable to recover it. 00:30:38.859 [2024-11-20 15:40:27.582049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.859 [2024-11-20 15:40:27.582078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.859 qpair failed and we were unable to recover it. 
00:30:38.859 [2024-11-20 15:40:27.582414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:38.859 [2024-11-20 15:40:27.582443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:38.859 qpair failed and we were unable to recover it.
00:30:38.864 [the three-line failure above repeats essentially verbatim for roughly 200 further connect() attempts, timestamps 2024-11-20 15:40:27.582806 through 15:40:27.662055; every attempt to addr=10.0.0.2, port=4420 fails with errno = 111 (ECONNREFUSED) and each qpair fails without recovery]
00:30:38.864 [2024-11-20 15:40:27.662413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.864 [2024-11-20 15:40:27.662443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.864 qpair failed and we were unable to recover it. 00:30:38.864 [2024-11-20 15:40:27.662774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.864 [2024-11-20 15:40:27.662803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.864 qpair failed and we were unable to recover it. 00:30:38.864 [2024-11-20 15:40:27.663146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.864 [2024-11-20 15:40:27.663184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.864 qpair failed and we were unable to recover it. 00:30:38.864 [2024-11-20 15:40:27.663531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.864 [2024-11-20 15:40:27.663560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.864 qpair failed and we were unable to recover it. 00:30:38.864 [2024-11-20 15:40:27.663923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.864 [2024-11-20 15:40:27.663952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.864 qpair failed and we were unable to recover it. 00:30:38.864 [2024-11-20 15:40:27.664305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.864 [2024-11-20 15:40:27.664336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.864 qpair failed and we were unable to recover it. 00:30:38.865 [2024-11-20 15:40:27.664587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.865 [2024-11-20 15:40:27.664617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.865 qpair failed and we were unable to recover it. 00:30:38.865 [2024-11-20 15:40:27.664978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.865 [2024-11-20 15:40:27.665007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.865 qpair failed and we were unable to recover it. 00:30:38.865 [2024-11-20 15:40:27.665430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.865 [2024-11-20 15:40:27.665461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.865 qpair failed and we were unable to recover it. 00:30:38.865 [2024-11-20 15:40:27.665833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.865 [2024-11-20 15:40:27.665861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.865 qpair failed and we were unable to recover it. 
00:30:38.865 [2024-11-20 15:40:27.666137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.865 [2024-11-20 15:40:27.666172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.865 qpair failed and we were unable to recover it. 00:30:38.865 [2024-11-20 15:40:27.666597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.865 [2024-11-20 15:40:27.666626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.865 qpair failed and we were unable to recover it. 00:30:38.865 [2024-11-20 15:40:27.666962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.865 [2024-11-20 15:40:27.666991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.865 qpair failed and we were unable to recover it. 00:30:38.865 [2024-11-20 15:40:27.667379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.865 [2024-11-20 15:40:27.667409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.865 qpair failed and we were unable to recover it. 00:30:38.865 [2024-11-20 15:40:27.667776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.865 [2024-11-20 15:40:27.667804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.865 qpair failed and we were unable to recover it. 00:30:38.865 [2024-11-20 15:40:27.668178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.865 [2024-11-20 15:40:27.668208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.865 qpair failed and we were unable to recover it. 00:30:38.865 [2024-11-20 15:40:27.668558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.865 [2024-11-20 15:40:27.668589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.865 qpair failed and we were unable to recover it. 00:30:38.865 [2024-11-20 15:40:27.668956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.865 [2024-11-20 15:40:27.668984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.865 qpair failed and we were unable to recover it. 00:30:38.865 [2024-11-20 15:40:27.669301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.865 [2024-11-20 15:40:27.669332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.865 qpair failed and we were unable to recover it. 00:30:38.865 [2024-11-20 15:40:27.669683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.865 [2024-11-20 15:40:27.669718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.865 qpair failed and we were unable to recover it. 
00:30:38.865 [2024-11-20 15:40:27.669965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.865 [2024-11-20 15:40:27.669994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.865 qpair failed and we were unable to recover it. 00:30:38.865 [2024-11-20 15:40:27.670383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.865 [2024-11-20 15:40:27.670412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.865 qpair failed and we were unable to recover it. 00:30:38.865 [2024-11-20 15:40:27.670745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.865 [2024-11-20 15:40:27.670775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.865 qpair failed and we were unable to recover it. 00:30:38.865 [2024-11-20 15:40:27.671213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.865 [2024-11-20 15:40:27.671243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.865 qpair failed and we were unable to recover it. 00:30:38.865 [2024-11-20 15:40:27.671609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.865 [2024-11-20 15:40:27.671637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.865 qpair failed and we were unable to recover it. 00:30:38.865 [2024-11-20 15:40:27.672002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.865 [2024-11-20 15:40:27.672030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.865 qpair failed and we were unable to recover it. 00:30:38.865 [2024-11-20 15:40:27.672310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.865 [2024-11-20 15:40:27.672340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.865 qpair failed and we were unable to recover it. 00:30:38.865 [2024-11-20 15:40:27.672708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.865 [2024-11-20 15:40:27.672736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.865 qpair failed and we were unable to recover it. 00:30:38.865 [2024-11-20 15:40:27.673093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.865 [2024-11-20 15:40:27.673123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.865 qpair failed and we were unable to recover it. 00:30:38.865 [2024-11-20 15:40:27.673508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.865 [2024-11-20 15:40:27.673541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.865 qpair failed and we were unable to recover it. 
00:30:38.865 [2024-11-20 15:40:27.673805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.865 [2024-11-20 15:40:27.673834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.865 qpair failed and we were unable to recover it. 00:30:38.865 [2024-11-20 15:40:27.674192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.865 [2024-11-20 15:40:27.674222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.865 qpair failed and we were unable to recover it. 00:30:38.865 [2024-11-20 15:40:27.674468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.865 [2024-11-20 15:40:27.674496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.865 qpair failed and we were unable to recover it. 00:30:38.865 [2024-11-20 15:40:27.674878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.865 [2024-11-20 15:40:27.674907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.865 qpair failed and we were unable to recover it. 00:30:38.865 [2024-11-20 15:40:27.675256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.865 [2024-11-20 15:40:27.675287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.865 qpair failed and we were unable to recover it. 00:30:38.865 [2024-11-20 15:40:27.675552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.865 [2024-11-20 15:40:27.675580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.865 qpair failed and we were unable to recover it. 00:30:38.865 [2024-11-20 15:40:27.675935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.865 [2024-11-20 15:40:27.675964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.865 qpair failed and we were unable to recover it. 00:30:38.865 [2024-11-20 15:40:27.676333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.865 [2024-11-20 15:40:27.676362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.865 qpair failed and we were unable to recover it. 00:30:38.865 [2024-11-20 15:40:27.676714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.865 [2024-11-20 15:40:27.676743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.865 qpair failed and we were unable to recover it. 00:30:38.865 [2024-11-20 15:40:27.677103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.865 [2024-11-20 15:40:27.677133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.865 qpair failed and we were unable to recover it. 
00:30:38.865 [2024-11-20 15:40:27.677571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.865 [2024-11-20 15:40:27.677600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.865 qpair failed and we were unable to recover it. 00:30:38.865 [2024-11-20 15:40:27.678749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.865 [2024-11-20 15:40:27.678796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.865 qpair failed and we were unable to recover it. 00:30:38.865 [2024-11-20 15:40:27.679203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.865 [2024-11-20 15:40:27.679235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.865 qpair failed and we were unable to recover it. 00:30:38.866 [2024-11-20 15:40:27.679597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.866 [2024-11-20 15:40:27.679625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.866 qpair failed and we were unable to recover it. 00:30:38.866 [2024-11-20 15:40:27.679995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.866 [2024-11-20 15:40:27.680023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.866 qpair failed and we were unable to recover it. 00:30:38.866 [2024-11-20 15:40:27.680379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.866 [2024-11-20 15:40:27.680410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.866 qpair failed and we were unable to recover it. 00:30:38.866 [2024-11-20 15:40:27.680785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.866 [2024-11-20 15:40:27.680814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.866 qpair failed and we were unable to recover it. 00:30:38.866 [2024-11-20 15:40:27.681182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.866 [2024-11-20 15:40:27.681212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.866 qpair failed and we were unable to recover it. 00:30:38.866 [2024-11-20 15:40:27.681477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.866 [2024-11-20 15:40:27.681510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.866 qpair failed and we were unable to recover it. 00:30:38.866 [2024-11-20 15:40:27.681865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.866 [2024-11-20 15:40:27.681894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.866 qpair failed and we were unable to recover it. 
00:30:38.866 [2024-11-20 15:40:27.682317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.866 [2024-11-20 15:40:27.682349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.866 qpair failed and we were unable to recover it. 00:30:38.866 [2024-11-20 15:40:27.682618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.866 [2024-11-20 15:40:27.682647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.866 qpair failed and we were unable to recover it. 00:30:38.866 [2024-11-20 15:40:27.683004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.866 [2024-11-20 15:40:27.683034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.866 qpair failed and we were unable to recover it. 00:30:38.866 [2024-11-20 15:40:27.683414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.866 [2024-11-20 15:40:27.683445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.866 qpair failed and we were unable to recover it. 00:30:38.866 [2024-11-20 15:40:27.683715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.866 [2024-11-20 15:40:27.683743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.866 qpair failed and we were unable to recover it. 00:30:38.866 [2024-11-20 15:40:27.684092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.866 [2024-11-20 15:40:27.684122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.866 qpair failed and we were unable to recover it. 00:30:38.866 [2024-11-20 15:40:27.684476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.866 [2024-11-20 15:40:27.684506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.866 qpair failed and we were unable to recover it. 00:30:38.866 [2024-11-20 15:40:27.684843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.866 [2024-11-20 15:40:27.684872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.866 qpair failed and we were unable to recover it. 00:30:38.866 [2024-11-20 15:40:27.685241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.866 [2024-11-20 15:40:27.685271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.866 qpair failed and we were unable to recover it. 00:30:38.866 [2024-11-20 15:40:27.685645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.866 [2024-11-20 15:40:27.685681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.866 qpair failed and we were unable to recover it. 
00:30:38.866 [2024-11-20 15:40:27.686045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.866 [2024-11-20 15:40:27.686074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.866 qpair failed and we were unable to recover it. 00:30:38.866 [2024-11-20 15:40:27.686458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.866 [2024-11-20 15:40:27.686488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.866 qpair failed and we were unable to recover it. 00:30:38.866 [2024-11-20 15:40:27.686855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.866 [2024-11-20 15:40:27.686883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.866 qpair failed and we were unable to recover it. 00:30:38.866 [2024-11-20 15:40:27.687260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.866 [2024-11-20 15:40:27.687290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.866 qpair failed and we were unable to recover it. 00:30:38.866 [2024-11-20 15:40:27.687672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.866 [2024-11-20 15:40:27.687700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.866 qpair failed and we were unable to recover it. 00:30:38.866 [2024-11-20 15:40:27.688125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.866 [2024-11-20 15:40:27.688153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.866 qpair failed and we were unable to recover it. 00:30:38.866 [2024-11-20 15:40:27.688594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.866 [2024-11-20 15:40:27.688623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.866 qpair failed and we were unable to recover it. 00:30:38.866 [2024-11-20 15:40:27.688969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.866 [2024-11-20 15:40:27.688997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.866 qpair failed and we were unable to recover it. 00:30:38.866 [2024-11-20 15:40:27.689377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.866 [2024-11-20 15:40:27.689406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.866 qpair failed and we were unable to recover it. 00:30:38.866 [2024-11-20 15:40:27.689756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.866 [2024-11-20 15:40:27.689785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.866 qpair failed and we were unable to recover it. 
00:30:38.866 [2024-11-20 15:40:27.690119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.866 [2024-11-20 15:40:27.690147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.866 qpair failed and we were unable to recover it. 00:30:38.866 [2024-11-20 15:40:27.690517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.866 [2024-11-20 15:40:27.690546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.866 qpair failed and we were unable to recover it. 00:30:38.866 [2024-11-20 15:40:27.690924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.866 [2024-11-20 15:40:27.690952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.866 qpair failed and we were unable to recover it. 00:30:38.866 [2024-11-20 15:40:27.691307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.866 [2024-11-20 15:40:27.691338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.866 qpair failed and we were unable to recover it. 00:30:38.866 [2024-11-20 15:40:27.691597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.866 [2024-11-20 15:40:27.691625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.866 qpair failed and we were unable to recover it. 00:30:38.866 [2024-11-20 15:40:27.691976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.866 [2024-11-20 15:40:27.692005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.866 qpair failed and we were unable to recover it. 00:30:38.866 [2024-11-20 15:40:27.692386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.866 [2024-11-20 15:40:27.692416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.866 qpair failed and we were unable to recover it. 00:30:38.866 [2024-11-20 15:40:27.692778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.866 [2024-11-20 15:40:27.692807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.866 qpair failed and we were unable to recover it. 00:30:38.866 [2024-11-20 15:40:27.693041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.866 [2024-11-20 15:40:27.693073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.866 qpair failed and we were unable to recover it. 00:30:38.866 [2024-11-20 15:40:27.693421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.867 [2024-11-20 15:40:27.693451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.867 qpair failed and we were unable to recover it. 
00:30:38.867 [2024-11-20 15:40:27.693787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.867 [2024-11-20 15:40:27.693817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.867 qpair failed and we were unable to recover it. 00:30:38.867 [2024-11-20 15:40:27.694204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.867 [2024-11-20 15:40:27.694233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.867 qpair failed and we were unable to recover it. 00:30:38.867 [2024-11-20 15:40:27.694599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.867 [2024-11-20 15:40:27.694627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.867 qpair failed and we were unable to recover it. 00:30:38.867 [2024-11-20 15:40:27.694993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.867 [2024-11-20 15:40:27.695021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.867 qpair failed and we were unable to recover it. 00:30:38.867 [2024-11-20 15:40:27.695369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.867 [2024-11-20 15:40:27.695399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.867 qpair failed and we were unable to recover it. 00:30:38.867 [2024-11-20 15:40:27.695744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.867 [2024-11-20 15:40:27.695773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.867 qpair failed and we were unable to recover it. 00:30:38.867 [2024-11-20 15:40:27.696201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.867 [2024-11-20 15:40:27.696232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.867 qpair failed and we were unable to recover it. 00:30:38.867 [2024-11-20 15:40:27.696616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.867 [2024-11-20 15:40:27.696645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.867 qpair failed and we were unable to recover it. 00:30:38.867 [2024-11-20 15:40:27.697012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.867 [2024-11-20 15:40:27.697040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.867 qpair failed and we were unable to recover it. 00:30:38.867 [2024-11-20 15:40:27.697298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.867 [2024-11-20 15:40:27.697328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.867 qpair failed and we were unable to recover it. 
00:30:38.867 [2024-11-20 15:40:27.697714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.867 [2024-11-20 15:40:27.697743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.867 qpair failed and we were unable to recover it. 00:30:38.867 [2024-11-20 15:40:27.698127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.867 [2024-11-20 15:40:27.698156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.867 qpair failed and we were unable to recover it. 00:30:38.867 [2024-11-20 15:40:27.698517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.867 [2024-11-20 15:40:27.698546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.867 qpair failed and we were unable to recover it. 00:30:38.867 [2024-11-20 15:40:27.698907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.867 [2024-11-20 15:40:27.698935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.867 qpair failed and we were unable to recover it. 00:30:38.867 [2024-11-20 15:40:27.699305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.867 [2024-11-20 15:40:27.699335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.867 qpair failed and we were unable to recover it. 00:30:38.867 [2024-11-20 15:40:27.699694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.867 [2024-11-20 15:40:27.699723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.867 qpair failed and we were unable to recover it. 00:30:38.867 [2024-11-20 15:40:27.700111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.867 [2024-11-20 15:40:27.700140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.867 qpair failed and we were unable to recover it. 00:30:38.867 [2024-11-20 15:40:27.700510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.867 [2024-11-20 15:40:27.700538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.867 qpair failed and we were unable to recover it. 00:30:38.867 [2024-11-20 15:40:27.700902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.867 [2024-11-20 15:40:27.700931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.867 qpair failed and we were unable to recover it. 00:30:38.867 [2024-11-20 15:40:27.701296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.867 [2024-11-20 15:40:27.701333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.867 qpair failed and we were unable to recover it. 
00:30:38.867 [2024-11-20 15:40:27.701692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.867 [2024-11-20 15:40:27.701720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.867 qpair failed and we were unable to recover it. 00:30:38.867 [2024-11-20 15:40:27.702111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.867 [2024-11-20 15:40:27.702141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.867 qpair failed and we were unable to recover it. 00:30:38.867 [2024-11-20 15:40:27.702487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.867 [2024-11-20 15:40:27.702516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.867 qpair failed and we were unable to recover it. 00:30:38.867 [2024-11-20 15:40:27.702882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.867 [2024-11-20 15:40:27.702911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.867 qpair failed and we were unable to recover it. 00:30:38.867 [2024-11-20 15:40:27.703276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.867 [2024-11-20 15:40:27.703307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.867 qpair failed and we were unable to recover it. 00:30:38.867 [2024-11-20 15:40:27.703705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.867 [2024-11-20 15:40:27.703734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.867 qpair failed and we were unable to recover it. 00:30:38.867 [2024-11-20 15:40:27.704068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.867 [2024-11-20 15:40:27.704097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.867 qpair failed and we were unable to recover it. 00:30:38.867 [2024-11-20 15:40:27.704460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.867 [2024-11-20 15:40:27.704490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.868 qpair failed and we were unable to recover it. 00:30:38.868 [2024-11-20 15:40:27.704872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.868 [2024-11-20 15:40:27.704902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.868 qpair failed and we were unable to recover it. 00:30:38.868 [2024-11-20 15:40:27.705169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.868 [2024-11-20 15:40:27.705198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.868 qpair failed and we were unable to recover it. 
00:30:38.868 [2024-11-20 15:40:27.705538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.868 [2024-11-20 15:40:27.705566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.868 qpair failed and we were unable to recover it. 00:30:38.868 [2024-11-20 15:40:27.705822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.868 [2024-11-20 15:40:27.705850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.868 qpair failed and we were unable to recover it. 00:30:38.868 [2024-11-20 15:40:27.706169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.868 [2024-11-20 15:40:27.706198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.868 qpair failed and we were unable to recover it. 00:30:38.868 [2024-11-20 15:40:27.706583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.868 [2024-11-20 15:40:27.706613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.868 qpair failed and we were unable to recover it. 00:30:38.868 [2024-11-20 15:40:27.706975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.868 [2024-11-20 15:40:27.707005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.868 qpair failed and we were unable to recover it. 00:30:38.868 [2024-11-20 15:40:27.707347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.868 [2024-11-20 15:40:27.707377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.868 qpair failed and we were unable to recover it. 00:30:38.868 [2024-11-20 15:40:27.707749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.868 [2024-11-20 15:40:27.707777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.868 qpair failed and we were unable to recover it. 00:30:38.868 [2024-11-20 15:40:27.708135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.868 [2024-11-20 15:40:27.708178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.868 qpair failed and we were unable to recover it. 00:30:38.868 [2024-11-20 15:40:27.708585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.868 [2024-11-20 15:40:27.708614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.868 qpair failed and we were unable to recover it. 00:30:38.868 [2024-11-20 15:40:27.708974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.868 [2024-11-20 15:40:27.709003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.868 qpair failed and we were unable to recover it. 
00:30:38.868 [2024-11-20 15:40:27.709386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.868 [2024-11-20 15:40:27.709416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.868 qpair failed and we were unable to recover it. 00:30:38.868 [2024-11-20 15:40:27.709783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.868 [2024-11-20 15:40:27.709811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.868 qpair failed and we were unable to recover it. 00:30:38.868 [2024-11-20 15:40:27.710179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.868 [2024-11-20 15:40:27.710209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.868 qpair failed and we were unable to recover it. 00:30:38.868 [2024-11-20 15:40:27.710479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.868 [2024-11-20 15:40:27.710507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.868 qpair failed and we were unable to recover it. 00:30:38.868 [2024-11-20 15:40:27.710864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.868 [2024-11-20 15:40:27.710894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.868 qpair failed and we were unable to recover it. 00:30:38.868 [2024-11-20 15:40:27.711235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.868 [2024-11-20 15:40:27.711265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.868 qpair failed and we were unable to recover it. 00:30:38.868 [2024-11-20 15:40:27.711626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.868 [2024-11-20 15:40:27.711655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.868 qpair failed and we were unable to recover it. 00:30:38.868 [2024-11-20 15:40:27.712035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.868 [2024-11-20 15:40:27.712064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.868 qpair failed and we were unable to recover it. 00:30:38.868 [2024-11-20 15:40:27.712440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.868 [2024-11-20 15:40:27.712470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.868 qpair failed and we were unable to recover it. 00:30:38.868 [2024-11-20 15:40:27.712843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.868 [2024-11-20 15:40:27.712871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.868 qpair failed and we were unable to recover it. 
00:30:38.868 [2024-11-20 15:40:27.713123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:38.868 [2024-11-20 15:40:27.713151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:38.868 qpair failed and we were unable to recover it.
[... the same three-line error sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats verbatim for every subsequent connection attempt between 15:40:27.713 and 15:40:27.794, differing only in timestamps; identical repetitions elided ...]
00:30:38.874 [2024-11-20 15:40:27.793706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:38.874 [2024-11-20 15:40:27.793740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:38.874 qpair failed and we were unable to recover it.
00:30:38.874 [2024-11-20 15:40:27.794101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.874 [2024-11-20 15:40:27.794130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.874 qpair failed and we were unable to recover it. 00:30:38.874 [2024-11-20 15:40:27.794381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.874 [2024-11-20 15:40:27.794413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.874 qpair failed and we were unable to recover it. 00:30:38.874 [2024-11-20 15:40:27.794783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.874 [2024-11-20 15:40:27.794812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.874 qpair failed and we were unable to recover it. 00:30:38.874 [2024-11-20 15:40:27.795190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.874 [2024-11-20 15:40:27.795221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.874 qpair failed and we were unable to recover it. 00:30:38.874 [2024-11-20 15:40:27.795576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.874 [2024-11-20 15:40:27.795604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.874 qpair failed and we were unable to recover it. 00:30:38.874 [2024-11-20 15:40:27.795967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.874 [2024-11-20 15:40:27.795996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.874 qpair failed and we were unable to recover it. 00:30:38.874 [2024-11-20 15:40:27.796256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.874 [2024-11-20 15:40:27.796286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.874 qpair failed and we were unable to recover it. 00:30:38.874 [2024-11-20 15:40:27.796674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.874 [2024-11-20 15:40:27.796702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.874 qpair failed and we were unable to recover it. 00:30:38.874 [2024-11-20 15:40:27.797078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.874 [2024-11-20 15:40:27.797107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.874 qpair failed and we were unable to recover it. 00:30:38.874 [2024-11-20 15:40:27.797471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.874 [2024-11-20 15:40:27.797500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.874 qpair failed and we were unable to recover it. 
00:30:38.874 [2024-11-20 15:40:27.797865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.874 [2024-11-20 15:40:27.797893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.874 qpair failed and we were unable to recover it. 00:30:38.874 [2024-11-20 15:40:27.798273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.874 [2024-11-20 15:40:27.798302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.874 qpair failed and we were unable to recover it. 00:30:38.874 [2024-11-20 15:40:27.798653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.874 [2024-11-20 15:40:27.798681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.874 qpair failed and we were unable to recover it. 00:30:38.874 [2024-11-20 15:40:27.799042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.874 [2024-11-20 15:40:27.799071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.874 qpair failed and we were unable to recover it. 00:30:38.874 [2024-11-20 15:40:27.799453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.874 [2024-11-20 15:40:27.799484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.874 qpair failed and we were unable to recover it. 00:30:38.874 [2024-11-20 15:40:27.799721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.874 [2024-11-20 15:40:27.799749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.874 qpair failed and we were unable to recover it. 00:30:38.874 [2024-11-20 15:40:27.800112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.874 [2024-11-20 15:40:27.800140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.874 qpair failed and we were unable to recover it. 00:30:38.874 [2024-11-20 15:40:27.800319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.874 [2024-11-20 15:40:27.800352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.874 qpair failed and we were unable to recover it. 00:30:38.874 [2024-11-20 15:40:27.800714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.874 [2024-11-20 15:40:27.800743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.874 qpair failed and we were unable to recover it. 00:30:38.874 [2024-11-20 15:40:27.801102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.874 [2024-11-20 15:40:27.801130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.874 qpair failed and we were unable to recover it. 
00:30:38.874 [2024-11-20 15:40:27.801556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.874 [2024-11-20 15:40:27.801585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:38.874 qpair failed and we were unable to recover it. 00:30:39.148 [2024-11-20 15:40:27.801859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.148 [2024-11-20 15:40:27.801890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.149 qpair failed and we were unable to recover it. 00:30:39.149 [2024-11-20 15:40:27.802260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.149 [2024-11-20 15:40:27.802289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.149 qpair failed and we were unable to recover it. 00:30:39.149 [2024-11-20 15:40:27.802637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.149 [2024-11-20 15:40:27.802667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.149 qpair failed and we were unable to recover it. 00:30:39.149 [2024-11-20 15:40:27.803032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.149 [2024-11-20 15:40:27.803060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.149 qpair failed and we were unable to recover it. 00:30:39.149 [2024-11-20 15:40:27.803400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.149 [2024-11-20 15:40:27.803431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.149 qpair failed and we were unable to recover it. 00:30:39.149 [2024-11-20 15:40:27.803789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.149 [2024-11-20 15:40:27.803818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.149 qpair failed and we were unable to recover it. 00:30:39.149 [2024-11-20 15:40:27.804190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.149 [2024-11-20 15:40:27.804221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.149 qpair failed and we were unable to recover it. 00:30:39.149 [2024-11-20 15:40:27.804579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.149 [2024-11-20 15:40:27.804607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.149 qpair failed and we were unable to recover it. 00:30:39.149 [2024-11-20 15:40:27.804897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.149 [2024-11-20 15:40:27.804925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.149 qpair failed and we were unable to recover it. 
00:30:39.149 [2024-11-20 15:40:27.805174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.149 [2024-11-20 15:40:27.805204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.149 qpair failed and we were unable to recover it. 00:30:39.149 [2024-11-20 15:40:27.805558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.149 [2024-11-20 15:40:27.805587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.149 qpair failed and we were unable to recover it. 00:30:39.149 [2024-11-20 15:40:27.805947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.149 [2024-11-20 15:40:27.805975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.149 qpair failed and we were unable to recover it. 00:30:39.149 [2024-11-20 15:40:27.806342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.149 [2024-11-20 15:40:27.806371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.149 qpair failed and we were unable to recover it. 00:30:39.149 [2024-11-20 15:40:27.806705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.149 [2024-11-20 15:40:27.806734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.149 qpair failed and we were unable to recover it. 00:30:39.149 [2024-11-20 15:40:27.807093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.149 [2024-11-20 15:40:27.807121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.149 qpair failed and we were unable to recover it. 00:30:39.149 [2024-11-20 15:40:27.807542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.149 [2024-11-20 15:40:27.807573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.149 qpair failed and we were unable to recover it. 00:30:39.149 [2024-11-20 15:40:27.807908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.149 [2024-11-20 15:40:27.807936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.149 qpair failed and we were unable to recover it. 00:30:39.149 [2024-11-20 15:40:27.808287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.149 [2024-11-20 15:40:27.808319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.149 qpair failed and we were unable to recover it. 00:30:39.149 [2024-11-20 15:40:27.808694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.149 [2024-11-20 15:40:27.808729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.149 qpair failed and we were unable to recover it. 
00:30:39.149 [2024-11-20 15:40:27.809085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.149 [2024-11-20 15:40:27.809113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.149 qpair failed and we were unable to recover it. 00:30:39.149 [2024-11-20 15:40:27.809478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.149 [2024-11-20 15:40:27.809508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.149 qpair failed and we were unable to recover it. 00:30:39.149 [2024-11-20 15:40:27.809875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.149 [2024-11-20 15:40:27.809904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.149 qpair failed and we were unable to recover it. 00:30:39.149 [2024-11-20 15:40:27.810141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.149 [2024-11-20 15:40:27.810177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.149 qpair failed and we were unable to recover it. 00:30:39.149 [2024-11-20 15:40:27.810558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.149 [2024-11-20 15:40:27.810587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.149 qpair failed and we were unable to recover it. 00:30:39.149 [2024-11-20 15:40:27.810931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.149 [2024-11-20 15:40:27.810961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.149 qpair failed and we were unable to recover it. 00:30:39.149 [2024-11-20 15:40:27.811308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.149 [2024-11-20 15:40:27.811339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.149 qpair failed and we were unable to recover it. 00:30:39.149 [2024-11-20 15:40:27.811674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.149 [2024-11-20 15:40:27.811705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.149 qpair failed and we were unable to recover it. 00:30:39.149 [2024-11-20 15:40:27.812101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.149 [2024-11-20 15:40:27.812131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.149 qpair failed and we were unable to recover it. 00:30:39.149 [2024-11-20 15:40:27.812502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.149 [2024-11-20 15:40:27.812542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.149 qpair failed and we were unable to recover it. 
00:30:39.149 [2024-11-20 15:40:27.812900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.149 [2024-11-20 15:40:27.812930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.149 qpair failed and we were unable to recover it. 00:30:39.149 [2024-11-20 15:40:27.813290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.149 [2024-11-20 15:40:27.813320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.149 qpair failed and we were unable to recover it. 00:30:39.149 [2024-11-20 15:40:27.813667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.149 [2024-11-20 15:40:27.813698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.149 qpair failed and we were unable to recover it. 00:30:39.149 [2024-11-20 15:40:27.814085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.149 [2024-11-20 15:40:27.814114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.149 qpair failed and we were unable to recover it. 00:30:39.149 [2024-11-20 15:40:27.814522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.149 [2024-11-20 15:40:27.814553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.149 qpair failed and we were unable to recover it. 00:30:39.149 [2024-11-20 15:40:27.814904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.149 [2024-11-20 15:40:27.814934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.149 qpair failed and we were unable to recover it. 00:30:39.149 [2024-11-20 15:40:27.815301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.149 [2024-11-20 15:40:27.815332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.149 qpair failed and we were unable to recover it. 00:30:39.149 [2024-11-20 15:40:27.815705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.149 [2024-11-20 15:40:27.815734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.149 qpair failed and we were unable to recover it. 00:30:39.149 [2024-11-20 15:40:27.816091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.149 [2024-11-20 15:40:27.816120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.149 qpair failed and we were unable to recover it. 00:30:39.150 [2024-11-20 15:40:27.816466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.150 [2024-11-20 15:40:27.816496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.150 qpair failed and we were unable to recover it. 
00:30:39.150 [2024-11-20 15:40:27.816854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.150 [2024-11-20 15:40:27.816883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.150 qpair failed and we were unable to recover it. 00:30:39.150 [2024-11-20 15:40:27.817249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.150 [2024-11-20 15:40:27.817280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.150 qpair failed and we were unable to recover it. 00:30:39.150 [2024-11-20 15:40:27.817643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.150 [2024-11-20 15:40:27.817673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.150 qpair failed and we were unable to recover it. 00:30:39.150 [2024-11-20 15:40:27.818032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.150 [2024-11-20 15:40:27.818061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.150 qpair failed and we were unable to recover it. 00:30:39.150 [2024-11-20 15:40:27.818432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.150 [2024-11-20 15:40:27.818462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.150 qpair failed and we were unable to recover it. 00:30:39.150 [2024-11-20 15:40:27.818831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.150 [2024-11-20 15:40:27.818860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.150 qpair failed and we were unable to recover it. 00:30:39.150 [2024-11-20 15:40:27.819111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.150 [2024-11-20 15:40:27.819140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.150 qpair failed and we were unable to recover it. 00:30:39.150 [2024-11-20 15:40:27.819584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.150 [2024-11-20 15:40:27.819615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.150 qpair failed and we were unable to recover it. 00:30:39.150 [2024-11-20 15:40:27.819986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.150 [2024-11-20 15:40:27.820014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.150 qpair failed and we were unable to recover it. 00:30:39.150 [2024-11-20 15:40:27.820431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.150 [2024-11-20 15:40:27.820463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.150 qpair failed and we were unable to recover it. 
00:30:39.150 [2024-11-20 15:40:27.820794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.150 [2024-11-20 15:40:27.820824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.150 qpair failed and we were unable to recover it. 00:30:39.150 [2024-11-20 15:40:27.821188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.150 [2024-11-20 15:40:27.821218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.150 qpair failed and we were unable to recover it. 00:30:39.150 [2024-11-20 15:40:27.821593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.150 [2024-11-20 15:40:27.821623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.150 qpair failed and we were unable to recover it. 00:30:39.150 [2024-11-20 15:40:27.821781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.150 [2024-11-20 15:40:27.821814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.150 qpair failed and we were unable to recover it. 00:30:39.150 [2024-11-20 15:40:27.822197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.150 [2024-11-20 15:40:27.822227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.150 qpair failed and we were unable to recover it. 00:30:39.150 [2024-11-20 15:40:27.822593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.150 [2024-11-20 15:40:27.822622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.150 qpair failed and we were unable to recover it. 00:30:39.150 [2024-11-20 15:40:27.822991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.150 [2024-11-20 15:40:27.823020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.150 qpair failed and we were unable to recover it. 00:30:39.150 [2024-11-20 15:40:27.823258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.150 [2024-11-20 15:40:27.823291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.150 qpair failed and we were unable to recover it. 00:30:39.150 [2024-11-20 15:40:27.823676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.150 [2024-11-20 15:40:27.823705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.150 qpair failed and we were unable to recover it. 00:30:39.150 [2024-11-20 15:40:27.824061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.150 [2024-11-20 15:40:27.824095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.150 qpair failed and we were unable to recover it. 
00:30:39.150 [2024-11-20 15:40:27.824500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.150 [2024-11-20 15:40:27.824531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.150 qpair failed and we were unable to recover it. 00:30:39.150 [2024-11-20 15:40:27.824889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.150 [2024-11-20 15:40:27.824920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.150 qpair failed and we were unable to recover it. 00:30:39.150 [2024-11-20 15:40:27.825286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.150 [2024-11-20 15:40:27.825317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.150 qpair failed and we were unable to recover it. 00:30:39.150 [2024-11-20 15:40:27.825767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.150 [2024-11-20 15:40:27.825796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.150 qpair failed and we were unable to recover it. 00:30:39.150 [2024-11-20 15:40:27.826180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.150 [2024-11-20 15:40:27.826210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.150 qpair failed and we were unable to recover it. 00:30:39.150 [2024-11-20 15:40:27.826587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.150 [2024-11-20 15:40:27.826616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.150 qpair failed and we were unable to recover it. 00:30:39.150 [2024-11-20 15:40:27.826978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.150 [2024-11-20 15:40:27.827006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.150 qpair failed and we were unable to recover it. 00:30:39.150 [2024-11-20 15:40:27.827352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.150 [2024-11-20 15:40:27.827382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.150 qpair failed and we were unable to recover it. 00:30:39.150 [2024-11-20 15:40:27.827758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.150 [2024-11-20 15:40:27.827786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.150 qpair failed and we were unable to recover it. 00:30:39.150 [2024-11-20 15:40:27.828143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.150 [2024-11-20 15:40:27.828180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.150 qpair failed and we were unable to recover it. 
00:30:39.150 [2024-11-20 15:40:27.828548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.150 [2024-11-20 15:40:27.828576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.150 qpair failed and we were unable to recover it. 00:30:39.150 [2024-11-20 15:40:27.828927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.150 [2024-11-20 15:40:27.828955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.150 qpair failed and we were unable to recover it. 00:30:39.150 [2024-11-20 15:40:27.829317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.150 [2024-11-20 15:40:27.829346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.150 qpair failed and we were unable to recover it. 00:30:39.150 [2024-11-20 15:40:27.829764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.150 [2024-11-20 15:40:27.829793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.150 qpair failed and we were unable to recover it. 00:30:39.150 [2024-11-20 15:40:27.830167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.150 [2024-11-20 15:40:27.830197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.150 qpair failed and we were unable to recover it. 00:30:39.150 [2024-11-20 15:40:27.830562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.150 [2024-11-20 15:40:27.830591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.150 qpair failed and we were unable to recover it. 00:30:39.150 [2024-11-20 15:40:27.830955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.150 [2024-11-20 15:40:27.830983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.151 qpair failed and we were unable to recover it. 00:30:39.151 [2024-11-20 15:40:27.831342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.151 [2024-11-20 15:40:27.831373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.151 qpair failed and we were unable to recover it. 00:30:39.151 [2024-11-20 15:40:27.831738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.151 [2024-11-20 15:40:27.831767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.151 qpair failed and we were unable to recover it. 00:30:39.151 [2024-11-20 15:40:27.832134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.151 [2024-11-20 15:40:27.832172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.151 qpair failed and we were unable to recover it. 
00:30:39.151 [2024-11-20 15:40:27.832542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.151 [2024-11-20 15:40:27.832571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.151 qpair failed and we were unable to recover it. 00:30:39.151 [2024-11-20 15:40:27.832884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.151 [2024-11-20 15:40:27.832915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.151 qpair failed and we were unable to recover it. 00:30:39.151 [2024-11-20 15:40:27.833291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.151 [2024-11-20 15:40:27.833322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.151 qpair failed and we were unable to recover it. 00:30:39.151 [2024-11-20 15:40:27.833680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.151 [2024-11-20 15:40:27.833710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.151 qpair failed and we were unable to recover it. 00:30:39.151 [2024-11-20 15:40:27.834071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.151 [2024-11-20 15:40:27.834100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.151 qpair failed and we were unable to recover it. 00:30:39.151 [2024-11-20 15:40:27.834479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.151 [2024-11-20 15:40:27.834510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.151 qpair failed and we were unable to recover it. 00:30:39.151 [2024-11-20 15:40:27.834872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.151 [2024-11-20 15:40:27.834904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.151 qpair failed and we were unable to recover it. 00:30:39.151 [2024-11-20 15:40:27.835263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.151 [2024-11-20 15:40:27.835293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.151 qpair failed and we were unable to recover it. 00:30:39.151 [2024-11-20 15:40:27.835671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.151 [2024-11-20 15:40:27.835700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.151 qpair failed and we were unable to recover it. 00:30:39.151 [2024-11-20 15:40:27.836047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.151 [2024-11-20 15:40:27.836077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.151 qpair failed and we were unable to recover it. 
00:30:39.151 [2024-11-20 15:40:27.836444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.151 [2024-11-20 15:40:27.836475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.151 qpair failed and we were unable to recover it. 00:30:39.151 [2024-11-20 15:40:27.836741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.151 [2024-11-20 15:40:27.836769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.151 qpair failed and we were unable to recover it. 00:30:39.151 [2024-11-20 15:40:27.837124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.151 [2024-11-20 15:40:27.837152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.151 qpair failed and we were unable to recover it. 00:30:39.151 [2024-11-20 15:40:27.837521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.151 [2024-11-20 15:40:27.837551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.151 qpair failed and we were unable to recover it. 00:30:39.151 [2024-11-20 15:40:27.837906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.151 [2024-11-20 15:40:27.837935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.151 qpair failed and we were unable to recover it. 00:30:39.151 [2024-11-20 15:40:27.838302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.151 [2024-11-20 15:40:27.838332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.151 qpair failed and we were unable to recover it. 00:30:39.151 [2024-11-20 15:40:27.838583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.151 [2024-11-20 15:40:27.838615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.151 qpair failed and we were unable to recover it. 00:30:39.151 [2024-11-20 15:40:27.838963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.151 [2024-11-20 15:40:27.838994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.151 qpair failed and we were unable to recover it. 00:30:39.151 [2024-11-20 15:40:27.839376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.151 [2024-11-20 15:40:27.839405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.151 qpair failed and we were unable to recover it. 00:30:39.151 [2024-11-20 15:40:27.839763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.151 [2024-11-20 15:40:27.839791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.151 qpair failed and we were unable to recover it. 
00:30:39.151 [2024-11-20 15:40:27.840151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.151 [2024-11-20 15:40:27.840192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.151 qpair failed and we were unable to recover it. 00:30:39.151 [2024-11-20 15:40:27.840575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.151 [2024-11-20 15:40:27.840604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.151 qpair failed and we were unable to recover it. 00:30:39.151 [2024-11-20 15:40:27.840961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.151 [2024-11-20 15:40:27.840991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.151 qpair failed and we were unable to recover it. 00:30:39.151 [2024-11-20 15:40:27.841341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.151 [2024-11-20 15:40:27.841373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.151 qpair failed and we were unable to recover it. 00:30:39.151 [2024-11-20 15:40:27.841674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.151 [2024-11-20 15:40:27.841712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.151 qpair failed and we were unable to recover it. 00:30:39.151 [2024-11-20 15:40:27.842105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.151 [2024-11-20 15:40:27.842134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.151 qpair failed and we were unable to recover it. 00:30:39.151 [2024-11-20 15:40:27.842493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.151 [2024-11-20 15:40:27.842522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.151 qpair failed and we were unable to recover it. 00:30:39.151 [2024-11-20 15:40:27.842870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.151 [2024-11-20 15:40:27.842899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.151 qpair failed and we were unable to recover it. 00:30:39.151 [2024-11-20 15:40:27.843237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.151 [2024-11-20 15:40:27.843269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.151 qpair failed and we were unable to recover it. 00:30:39.151 [2024-11-20 15:40:27.843613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.151 [2024-11-20 15:40:27.843644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.151 qpair failed and we were unable to recover it. 
00:30:39.151 [2024-11-20 15:40:27.844002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.151 [2024-11-20 15:40:27.844030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.151 qpair failed and we were unable to recover it.
00:30:39.151 [... the same three-message failure (posix.c:1054 connect() errno = 111, nvme_tcp.c:2288 sock connection error, "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 15:40:27.844002 through 15:40:27.924266, always for tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 ...]
00:30:39.157 [2024-11-20 15:40:27.924235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.157 [2024-11-20 15:40:27.924266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.157 qpair failed and we were unable to recover it.
00:30:39.157 [2024-11-20 15:40:27.924630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.157 [2024-11-20 15:40:27.924658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.157 qpair failed and we were unable to recover it. 00:30:39.157 [2024-11-20 15:40:27.924926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.157 [2024-11-20 15:40:27.924954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.157 qpair failed and we were unable to recover it. 00:30:39.157 [2024-11-20 15:40:27.925307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.157 [2024-11-20 15:40:27.925338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.157 qpair failed and we were unable to recover it. 00:30:39.157 [2024-11-20 15:40:27.925693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.157 [2024-11-20 15:40:27.925722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.157 qpair failed and we were unable to recover it. 00:30:39.157 [2024-11-20 15:40:27.926178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.157 [2024-11-20 15:40:27.926207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.157 qpair failed and we were unable to recover it. 00:30:39.157 [2024-11-20 15:40:27.926582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.157 [2024-11-20 15:40:27.926610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.157 qpair failed and we were unable to recover it. 00:30:39.157 [2024-11-20 15:40:27.926990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.157 [2024-11-20 15:40:27.927025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.157 qpair failed and we were unable to recover it. 00:30:39.157 [2024-11-20 15:40:27.927454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.157 [2024-11-20 15:40:27.927484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.157 qpair failed and we were unable to recover it. 00:30:39.157 [2024-11-20 15:40:27.927852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.157 [2024-11-20 15:40:27.927880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.157 qpair failed and we were unable to recover it. 00:30:39.157 [2024-11-20 15:40:27.928253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.157 [2024-11-20 15:40:27.928283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.157 qpair failed and we were unable to recover it. 
00:30:39.157 [2024-11-20 15:40:27.928653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.157 [2024-11-20 15:40:27.928681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.157 qpair failed and we were unable to recover it. 00:30:39.157 [2024-11-20 15:40:27.929043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.157 [2024-11-20 15:40:27.929071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.157 qpair failed and we were unable to recover it. 00:30:39.157 [2024-11-20 15:40:27.929490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.157 [2024-11-20 15:40:27.929520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.157 qpair failed and we were unable to recover it. 00:30:39.157 [2024-11-20 15:40:27.929886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.158 [2024-11-20 15:40:27.929914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.158 qpair failed and we were unable to recover it. 00:30:39.158 [2024-11-20 15:40:27.930271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.158 [2024-11-20 15:40:27.930301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.158 qpair failed and we were unable to recover it. 00:30:39.158 [2024-11-20 15:40:27.930683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.158 [2024-11-20 15:40:27.930712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.158 qpair failed and we were unable to recover it. 00:30:39.158 [2024-11-20 15:40:27.931060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.158 [2024-11-20 15:40:27.931089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.158 qpair failed and we were unable to recover it. 00:30:39.158 [2024-11-20 15:40:27.931349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.158 [2024-11-20 15:40:27.931379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.158 qpair failed and we were unable to recover it. 00:30:39.158 [2024-11-20 15:40:27.931724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.158 [2024-11-20 15:40:27.931754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.158 qpair failed and we were unable to recover it. 00:30:39.158 [2024-11-20 15:40:27.932101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.158 [2024-11-20 15:40:27.932129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.158 qpair failed and we were unable to recover it. 
00:30:39.158 [2024-11-20 15:40:27.932489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.158 [2024-11-20 15:40:27.932519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.158 qpair failed and we were unable to recover it. 00:30:39.158 [2024-11-20 15:40:27.932867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.158 [2024-11-20 15:40:27.932895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.158 qpair failed and we were unable to recover it. 00:30:39.158 [2024-11-20 15:40:27.933250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.158 [2024-11-20 15:40:27.933280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.158 qpair failed and we were unable to recover it. 00:30:39.158 [2024-11-20 15:40:27.933654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.158 [2024-11-20 15:40:27.933682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.158 qpair failed and we were unable to recover it. 00:30:39.158 [2024-11-20 15:40:27.933942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.158 [2024-11-20 15:40:27.933973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.158 qpair failed and we were unable to recover it. 00:30:39.158 [2024-11-20 15:40:27.934318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.158 [2024-11-20 15:40:27.934348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.158 qpair failed and we were unable to recover it. 00:30:39.158 [2024-11-20 15:40:27.934690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.158 [2024-11-20 15:40:27.934719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.158 qpair failed and we were unable to recover it. 00:30:39.158 [2024-11-20 15:40:27.935058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.158 [2024-11-20 15:40:27.935087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.158 qpair failed and we were unable to recover it. 00:30:39.158 [2024-11-20 15:40:27.935456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.158 [2024-11-20 15:40:27.935486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.158 qpair failed and we were unable to recover it. 00:30:39.158 [2024-11-20 15:40:27.935844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.158 [2024-11-20 15:40:27.935872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.158 qpair failed and we were unable to recover it. 
00:30:39.158 [2024-11-20 15:40:27.936256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.158 [2024-11-20 15:40:27.936286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.158 qpair failed and we were unable to recover it. 00:30:39.158 [2024-11-20 15:40:27.936666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.158 [2024-11-20 15:40:27.936694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.158 qpair failed and we were unable to recover it. 00:30:39.158 [2024-11-20 15:40:27.936955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.158 [2024-11-20 15:40:27.936983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.158 qpair failed and we were unable to recover it. 00:30:39.158 [2024-11-20 15:40:27.937454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.158 [2024-11-20 15:40:27.937485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.158 qpair failed and we were unable to recover it. 00:30:39.158 [2024-11-20 15:40:27.937857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.158 [2024-11-20 15:40:27.937885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.158 qpair failed and we were unable to recover it. 00:30:39.158 [2024-11-20 15:40:27.938293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.158 [2024-11-20 15:40:27.938323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.158 qpair failed and we were unable to recover it. 00:30:39.158 [2024-11-20 15:40:27.938685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.158 [2024-11-20 15:40:27.938714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.158 qpair failed and we were unable to recover it. 00:30:39.158 [2024-11-20 15:40:27.939079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.158 [2024-11-20 15:40:27.939108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.158 qpair failed and we were unable to recover it. 00:30:39.158 [2024-11-20 15:40:27.939445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.158 [2024-11-20 15:40:27.939483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.158 qpair failed and we were unable to recover it. 00:30:39.158 [2024-11-20 15:40:27.939820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.158 [2024-11-20 15:40:27.939849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.158 qpair failed and we were unable to recover it. 
00:30:39.158 [2024-11-20 15:40:27.940190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.158 [2024-11-20 15:40:27.940221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.158 qpair failed and we were unable to recover it. 00:30:39.158 [2024-11-20 15:40:27.940636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.158 [2024-11-20 15:40:27.940665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.158 qpair failed and we were unable to recover it. 00:30:39.158 [2024-11-20 15:40:27.941022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.158 [2024-11-20 15:40:27.941051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.158 qpair failed and we were unable to recover it. 00:30:39.158 [2024-11-20 15:40:27.941401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.158 [2024-11-20 15:40:27.941432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.158 qpair failed and we were unable to recover it. 00:30:39.158 [2024-11-20 15:40:27.941800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.158 [2024-11-20 15:40:27.941830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.158 qpair failed and we were unable to recover it. 00:30:39.158 [2024-11-20 15:40:27.942191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.158 [2024-11-20 15:40:27.942221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.158 qpair failed and we were unable to recover it. 00:30:39.159 [2024-11-20 15:40:27.942588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.159 [2024-11-20 15:40:27.942623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.159 qpair failed and we were unable to recover it. 00:30:39.159 [2024-11-20 15:40:27.942984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.159 [2024-11-20 15:40:27.943013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.159 qpair failed and we were unable to recover it. 00:30:39.159 [2024-11-20 15:40:27.943381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.159 [2024-11-20 15:40:27.943410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.159 qpair failed and we were unable to recover it. 00:30:39.159 [2024-11-20 15:40:27.943681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.159 [2024-11-20 15:40:27.943711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.159 qpair failed and we were unable to recover it. 
00:30:39.159 [2024-11-20 15:40:27.944052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.159 [2024-11-20 15:40:27.944083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.159 qpair failed and we were unable to recover it. 00:30:39.159 [2024-11-20 15:40:27.944439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.159 [2024-11-20 15:40:27.944469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.159 qpair failed and we were unable to recover it. 00:30:39.159 [2024-11-20 15:40:27.944827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.159 [2024-11-20 15:40:27.944855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.159 qpair failed and we were unable to recover it. 00:30:39.159 [2024-11-20 15:40:27.945216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.159 [2024-11-20 15:40:27.945246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.159 qpair failed and we were unable to recover it. 00:30:39.159 [2024-11-20 15:40:27.945605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.159 [2024-11-20 15:40:27.945633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.159 qpair failed and we were unable to recover it. 00:30:39.159 [2024-11-20 15:40:27.945999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.159 [2024-11-20 15:40:27.946028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.159 qpair failed and we were unable to recover it. 00:30:39.159 [2024-11-20 15:40:27.946269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.159 [2024-11-20 15:40:27.946298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.159 qpair failed and we were unable to recover it. 00:30:39.159 [2024-11-20 15:40:27.946559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.159 [2024-11-20 15:40:27.946589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.159 qpair failed and we were unable to recover it. 00:30:39.159 [2024-11-20 15:40:27.946950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.159 [2024-11-20 15:40:27.946979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.159 qpair failed and we were unable to recover it. 00:30:39.159 [2024-11-20 15:40:27.947224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.159 [2024-11-20 15:40:27.947253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.159 qpair failed and we were unable to recover it. 
00:30:39.159 [2024-11-20 15:40:27.947620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.159 [2024-11-20 15:40:27.947649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.159 qpair failed and we were unable to recover it. 00:30:39.159 [2024-11-20 15:40:27.948014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.159 [2024-11-20 15:40:27.948043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.159 qpair failed and we were unable to recover it. 00:30:39.159 [2024-11-20 15:40:27.948303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.159 [2024-11-20 15:40:27.948333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.159 qpair failed and we were unable to recover it. 00:30:39.159 [2024-11-20 15:40:27.948608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.159 [2024-11-20 15:40:27.948637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.159 qpair failed and we were unable to recover it. 00:30:39.159 [2024-11-20 15:40:27.948999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.159 [2024-11-20 15:40:27.949029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.159 qpair failed and we were unable to recover it. 00:30:39.159 [2024-11-20 15:40:27.949383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.159 [2024-11-20 15:40:27.949414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.159 qpair failed and we were unable to recover it. 00:30:39.159 [2024-11-20 15:40:27.949774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.159 [2024-11-20 15:40:27.949804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.159 qpair failed and we were unable to recover it. 00:30:39.159 [2024-11-20 15:40:27.950044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.159 [2024-11-20 15:40:27.950074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.159 qpair failed and we were unable to recover it. 00:30:39.159 [2024-11-20 15:40:27.950415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.159 [2024-11-20 15:40:27.950444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.159 qpair failed and we were unable to recover it. 00:30:39.159 [2024-11-20 15:40:27.950812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.159 [2024-11-20 15:40:27.950842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.159 qpair failed and we were unable to recover it. 
00:30:39.159 [2024-11-20 15:40:27.951213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.159 [2024-11-20 15:40:27.951242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.159 qpair failed and we were unable to recover it. 00:30:39.159 [2024-11-20 15:40:27.951618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.159 [2024-11-20 15:40:27.951646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.159 qpair failed and we were unable to recover it. 00:30:39.159 [2024-11-20 15:40:27.952023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.159 [2024-11-20 15:40:27.952052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.159 qpair failed and we were unable to recover it. 00:30:39.159 [2024-11-20 15:40:27.952482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.159 [2024-11-20 15:40:27.952512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.159 qpair failed and we were unable to recover it. 00:30:39.159 [2024-11-20 15:40:27.952741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.159 [2024-11-20 15:40:27.952769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.159 qpair failed and we were unable to recover it. 00:30:39.159 [2024-11-20 15:40:27.953125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.159 [2024-11-20 15:40:27.953154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.159 qpair failed and we were unable to recover it. 00:30:39.159 [2024-11-20 15:40:27.953533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.159 [2024-11-20 15:40:27.953562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.159 qpair failed and we were unable to recover it. 00:30:39.159 [2024-11-20 15:40:27.953995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.159 [2024-11-20 15:40:27.954023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.159 qpair failed and we were unable to recover it. 00:30:39.159 [2024-11-20 15:40:27.954391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.159 [2024-11-20 15:40:27.954421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.159 qpair failed and we were unable to recover it. 00:30:39.159 [2024-11-20 15:40:27.954828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.159 [2024-11-20 15:40:27.954857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.159 qpair failed and we were unable to recover it. 
00:30:39.159 [2024-11-20 15:40:27.955206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.159 [2024-11-20 15:40:27.955236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.159 qpair failed and we were unable to recover it. 00:30:39.159 [2024-11-20 15:40:27.955620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.159 [2024-11-20 15:40:27.955648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.159 qpair failed and we were unable to recover it. 00:30:39.159 [2024-11-20 15:40:27.956025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.159 [2024-11-20 15:40:27.956053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.159 qpair failed and we were unable to recover it. 00:30:39.159 [2024-11-20 15:40:27.956409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.159 [2024-11-20 15:40:27.956440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.160 qpair failed and we were unable to recover it. 00:30:39.160 [2024-11-20 15:40:27.956815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.160 [2024-11-20 15:40:27.956843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.160 qpair failed and we were unable to recover it. 00:30:39.160 [2024-11-20 15:40:27.957217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.160 [2024-11-20 15:40:27.957247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.160 qpair failed and we were unable to recover it. 00:30:39.160 [2024-11-20 15:40:27.957584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.160 [2024-11-20 15:40:27.957617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.160 qpair failed and we were unable to recover it. 00:30:39.160 [2024-11-20 15:40:27.957971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.160 [2024-11-20 15:40:27.958000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.160 qpair failed and we were unable to recover it. 00:30:39.160 [2024-11-20 15:40:27.958367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.160 [2024-11-20 15:40:27.958398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.160 qpair failed and we were unable to recover it. 00:30:39.160 [2024-11-20 15:40:27.958849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.160 [2024-11-20 15:40:27.958879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.160 qpair failed and we were unable to recover it. 
00:30:39.160 [2024-11-20 15:40:27.959248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.160 [2024-11-20 15:40:27.959278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.160 qpair failed and we were unable to recover it. 00:30:39.160 [2024-11-20 15:40:27.959666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.160 [2024-11-20 15:40:27.959695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.160 qpair failed and we were unable to recover it. 00:30:39.160 [2024-11-20 15:40:27.960046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.160 [2024-11-20 15:40:27.960074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.160 qpair failed and we were unable to recover it. 00:30:39.160 [2024-11-20 15:40:27.960512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.160 [2024-11-20 15:40:27.960542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.160 qpair failed and we were unable to recover it. 00:30:39.160 [2024-11-20 15:40:27.960893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.160 [2024-11-20 15:40:27.960921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.160 qpair failed and we were unable to recover it. 00:30:39.160 [2024-11-20 15:40:27.961155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.160 [2024-11-20 15:40:27.961191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.160 qpair failed and we were unable to recover it. 00:30:39.160 [2024-11-20 15:40:27.961610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.160 [2024-11-20 15:40:27.961639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.160 qpair failed and we were unable to recover it. 00:30:39.160 [2024-11-20 15:40:27.962020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.160 [2024-11-20 15:40:27.962049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.160 qpair failed and we were unable to recover it. 00:30:39.160 [2024-11-20 15:40:27.962417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.160 [2024-11-20 15:40:27.962454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.160 qpair failed and we were unable to recover it. 00:30:39.160 [2024-11-20 15:40:27.962812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.160 [2024-11-20 15:40:27.962840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.160 qpair failed and we were unable to recover it. 
00:30:39.160 [2024-11-20 15:40:27.963185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.160 [2024-11-20 15:40:27.963216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.160 qpair failed and we were unable to recover it. 00:30:39.160 [2024-11-20 15:40:27.963580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.160 [2024-11-20 15:40:27.963608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.160 qpair failed and we were unable to recover it. 00:30:39.160 [2024-11-20 15:40:27.963974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.160 [2024-11-20 15:40:27.964002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.160 qpair failed and we were unable to recover it. 00:30:39.160 [2024-11-20 15:40:27.964391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.160 [2024-11-20 15:40:27.964422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.160 qpair failed and we were unable to recover it. 00:30:39.160 [2024-11-20 15:40:27.964590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.160 [2024-11-20 15:40:27.964622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.160 qpair failed and we were unable to recover it. 00:30:39.160 [2024-11-20 15:40:27.964986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.160 [2024-11-20 15:40:27.965015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.160 qpair failed and we were unable to recover it. 00:30:39.160 [2024-11-20 15:40:27.965362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.160 [2024-11-20 15:40:27.965391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.160 qpair failed and we were unable to recover it. 00:30:39.160 [2024-11-20 15:40:27.965777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.160 [2024-11-20 15:40:27.965806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.160 qpair failed and we were unable to recover it. 00:30:39.160 [2024-11-20 15:40:27.966177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.160 [2024-11-20 15:40:27.966207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.160 qpair failed and we were unable to recover it. 00:30:39.160 [2024-11-20 15:40:27.966621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.160 [2024-11-20 15:40:27.966650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.160 qpair failed and we were unable to recover it. 
00:30:39.160 [2024-11-20 15:40:27.967015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.160 [2024-11-20 15:40:27.967043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.160 qpair failed and we were unable to recover it. 00:30:39.160 [2024-11-20 15:40:27.967429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.160 [2024-11-20 15:40:27.967459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.160 qpair failed and we were unable to recover it. 00:30:39.160 [2024-11-20 15:40:27.967837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.160 [2024-11-20 15:40:27.967867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.160 qpair failed and we were unable to recover it. 00:30:39.160 [2024-11-20 15:40:27.968231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.160 [2024-11-20 15:40:27.968261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.160 qpair failed and we were unable to recover it. 00:30:39.160 [2024-11-20 15:40:27.968614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.160 [2024-11-20 15:40:27.968644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.160 qpair failed and we were unable to recover it. 00:30:39.160 [2024-11-20 15:40:27.968984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.160 [2024-11-20 15:40:27.969012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.160 qpair failed and we were unable to recover it. 00:30:39.160 [2024-11-20 15:40:27.969480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.160 [2024-11-20 15:40:27.969509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.160 qpair failed and we were unable to recover it. 00:30:39.160 [2024-11-20 15:40:27.969855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.160 [2024-11-20 15:40:27.969883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.160 qpair failed and we were unable to recover it. 00:30:39.160 [2024-11-20 15:40:27.970259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.160 [2024-11-20 15:40:27.970288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.160 qpair failed and we were unable to recover it. 00:30:39.160 [2024-11-20 15:40:27.970662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.160 [2024-11-20 15:40:27.970689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.160 qpair failed and we were unable to recover it. 
00:30:39.160 [2024-11-20 15:40:27.971056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.160 [2024-11-20 15:40:27.971085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.160 qpair failed and we were unable to recover it. 00:30:39.160 [2024-11-20 15:40:27.971409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.161 [2024-11-20 15:40:27.971439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.161 qpair failed and we were unable to recover it. 00:30:39.161 [2024-11-20 15:40:27.971830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.161 [2024-11-20 15:40:27.971859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.161 qpair failed and we were unable to recover it. 00:30:39.161 [2024-11-20 15:40:27.972225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.161 [2024-11-20 15:40:27.972255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.161 qpair failed and we were unable to recover it. 00:30:39.161 [2024-11-20 15:40:27.972611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.161 [2024-11-20 15:40:27.972639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.161 qpair failed and we were unable to recover it. 00:30:39.161 [2024-11-20 15:40:27.973019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.161 [2024-11-20 15:40:27.973047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.161 qpair failed and we were unable to recover it. 00:30:39.161 [2024-11-20 15:40:27.973403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.161 [2024-11-20 15:40:27.973443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.161 qpair failed and we were unable to recover it. 00:30:39.161 [2024-11-20 15:40:27.973790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.161 [2024-11-20 15:40:27.973819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.161 qpair failed and we were unable to recover it. 00:30:39.161 [2024-11-20 15:40:27.974174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.161 [2024-11-20 15:40:27.974203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.161 qpair failed and we were unable to recover it. 00:30:39.161 [2024-11-20 15:40:27.974581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.161 [2024-11-20 15:40:27.974609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.161 qpair failed and we were unable to recover it. 
00:30:39.161 [2024-11-20 15:40:27.974959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:39.161 [2024-11-20 15:40:27.974988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:39.161 qpair failed and we were unable to recover it.
[... the same three-line error repeats back-to-back for every reconnect attempt between 15:40:27.974959 and 15:40:28.054753 (~210 occurrences), always errno = 111 for tqpair=0x7f8e88000b90 against addr=10.0.0.2, port=4420 ...]
00:30:39.166 [2024-11-20 15:40:28.054725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:39.166 [2024-11-20 15:40:28.054753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:39.166 qpair failed and we were unable to recover it.
00:30:39.166 [2024-11-20 15:40:28.055126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.166 [2024-11-20 15:40:28.055156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.166 qpair failed and we were unable to recover it. 00:30:39.166 [2024-11-20 15:40:28.055550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.166 [2024-11-20 15:40:28.055579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.166 qpair failed and we were unable to recover it. 00:30:39.166 [2024-11-20 15:40:28.055944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.166 [2024-11-20 15:40:28.055972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.166 qpair failed and we were unable to recover it. 00:30:39.166 [2024-11-20 15:40:28.056336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.166 [2024-11-20 15:40:28.056366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.166 qpair failed and we were unable to recover it. 00:30:39.166 [2024-11-20 15:40:28.056722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.167 [2024-11-20 15:40:28.056751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.167 qpair failed and we were unable to recover it. 00:30:39.167 [2024-11-20 15:40:28.057115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.167 [2024-11-20 15:40:28.057143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.167 qpair failed and we were unable to recover it. 00:30:39.167 [2024-11-20 15:40:28.057505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.167 [2024-11-20 15:40:28.057534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.167 qpair failed and we were unable to recover it. 00:30:39.167 [2024-11-20 15:40:28.057770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.167 [2024-11-20 15:40:28.057799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.167 qpair failed and we were unable to recover it. 00:30:39.167 [2024-11-20 15:40:28.058189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.167 [2024-11-20 15:40:28.058218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.167 qpair failed and we were unable to recover it. 00:30:39.167 [2024-11-20 15:40:28.058588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.167 [2024-11-20 15:40:28.058616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.167 qpair failed and we were unable to recover it. 
00:30:39.167 [2024-11-20 15:40:28.058996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.167 [2024-11-20 15:40:28.059024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.167 qpair failed and we were unable to recover it. 00:30:39.167 [2024-11-20 15:40:28.059385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.167 [2024-11-20 15:40:28.059413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.167 qpair failed and we were unable to recover it. 00:30:39.167 [2024-11-20 15:40:28.059782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.167 [2024-11-20 15:40:28.059810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.167 qpair failed and we were unable to recover it. 00:30:39.167 [2024-11-20 15:40:28.060174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.167 [2024-11-20 15:40:28.060211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.167 qpair failed and we were unable to recover it. 00:30:39.167 [2024-11-20 15:40:28.060579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.167 [2024-11-20 15:40:28.060607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.167 qpair failed and we were unable to recover it. 00:30:39.167 [2024-11-20 15:40:28.060966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.167 [2024-11-20 15:40:28.060994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.167 qpair failed and we were unable to recover it. 00:30:39.167 [2024-11-20 15:40:28.061349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.167 [2024-11-20 15:40:28.061379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.167 qpair failed and we were unable to recover it. 00:30:39.167 [2024-11-20 15:40:28.061734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.167 [2024-11-20 15:40:28.061763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.167 qpair failed and we were unable to recover it. 00:30:39.167 [2024-11-20 15:40:28.062121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.167 [2024-11-20 15:40:28.062150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.167 qpair failed and we were unable to recover it. 00:30:39.167 [2024-11-20 15:40:28.062489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.167 [2024-11-20 15:40:28.062517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.167 qpair failed and we were unable to recover it. 
00:30:39.167 [2024-11-20 15:40:28.062873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.167 [2024-11-20 15:40:28.062901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.167 qpair failed and we were unable to recover it. 00:30:39.167 [2024-11-20 15:40:28.063252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.167 [2024-11-20 15:40:28.063281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.167 qpair failed and we were unable to recover it. 00:30:39.167 [2024-11-20 15:40:28.063637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.167 [2024-11-20 15:40:28.063667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.167 qpair failed and we were unable to recover it. 00:30:39.167 [2024-11-20 15:40:28.064033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.167 [2024-11-20 15:40:28.064061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.167 qpair failed and we were unable to recover it. 00:30:39.167 [2024-11-20 15:40:28.064402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.167 [2024-11-20 15:40:28.064433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.167 qpair failed and we were unable to recover it. 00:30:39.167 [2024-11-20 15:40:28.064789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.167 [2024-11-20 15:40:28.064817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.167 qpair failed and we were unable to recover it. 00:30:39.167 [2024-11-20 15:40:28.065189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.167 [2024-11-20 15:40:28.065219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.167 qpair failed and we were unable to recover it. 00:30:39.167 [2024-11-20 15:40:28.065504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.167 [2024-11-20 15:40:28.065533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.167 qpair failed and we were unable to recover it. 00:30:39.167 [2024-11-20 15:40:28.065890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.167 [2024-11-20 15:40:28.065918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.167 qpair failed and we were unable to recover it. 00:30:39.167 [2024-11-20 15:40:28.066289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.167 [2024-11-20 15:40:28.066319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.167 qpair failed and we were unable to recover it. 
00:30:39.167 [2024-11-20 15:40:28.066717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.167 [2024-11-20 15:40:28.066745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.167 qpair failed and we were unable to recover it. 00:30:39.167 [2024-11-20 15:40:28.066959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.167 [2024-11-20 15:40:28.066987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.167 qpair failed and we were unable to recover it. 00:30:39.167 [2024-11-20 15:40:28.067254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.167 [2024-11-20 15:40:28.067283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.167 qpair failed and we were unable to recover it. 00:30:39.167 [2024-11-20 15:40:28.067630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.167 [2024-11-20 15:40:28.067658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.167 qpair failed and we were unable to recover it. 00:30:39.167 [2024-11-20 15:40:28.068023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.167 [2024-11-20 15:40:28.068052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.167 qpair failed and we were unable to recover it. 00:30:39.167 [2024-11-20 15:40:28.068398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.167 [2024-11-20 15:40:28.068427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.167 qpair failed and we were unable to recover it. 00:30:39.167 [2024-11-20 15:40:28.068788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.167 [2024-11-20 15:40:28.068818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.167 qpair failed and we were unable to recover it. 00:30:39.167 [2024-11-20 15:40:28.069179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.167 [2024-11-20 15:40:28.069210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.167 qpair failed and we were unable to recover it. 00:30:39.168 [2024-11-20 15:40:28.069598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.168 [2024-11-20 15:40:28.069627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.168 qpair failed and we were unable to recover it. 00:30:39.168 [2024-11-20 15:40:28.069992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.168 [2024-11-20 15:40:28.070020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.168 qpair failed and we were unable to recover it. 
00:30:39.168 [2024-11-20 15:40:28.070436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.168 [2024-11-20 15:40:28.070467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.168 qpair failed and we were unable to recover it. 00:30:39.168 [2024-11-20 15:40:28.070705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.168 [2024-11-20 15:40:28.070733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.168 qpair failed and we were unable to recover it. 00:30:39.168 [2024-11-20 15:40:28.071100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.168 [2024-11-20 15:40:28.071128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.168 qpair failed and we were unable to recover it. 00:30:39.168 [2024-11-20 15:40:28.071519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.168 [2024-11-20 15:40:28.071548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.168 qpair failed and we were unable to recover it. 00:30:39.168 [2024-11-20 15:40:28.071895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.168 [2024-11-20 15:40:28.071925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.168 qpair failed and we were unable to recover it. 00:30:39.168 [2024-11-20 15:40:28.072288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.168 [2024-11-20 15:40:28.072318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.168 qpair failed and we were unable to recover it. 00:30:39.168 [2024-11-20 15:40:28.072654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.168 [2024-11-20 15:40:28.072682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.168 qpair failed and we were unable to recover it. 00:30:39.168 [2024-11-20 15:40:28.073046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.168 [2024-11-20 15:40:28.073074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.168 qpair failed and we were unable to recover it. 00:30:39.168 [2024-11-20 15:40:28.073423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.168 [2024-11-20 15:40:28.073453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.168 qpair failed and we were unable to recover it. 00:30:39.168 [2024-11-20 15:40:28.073786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.168 [2024-11-20 15:40:28.073815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.168 qpair failed and we were unable to recover it. 
00:30:39.168 [2024-11-20 15:40:28.074187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.168 [2024-11-20 15:40:28.074217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.168 qpair failed and we were unable to recover it. 00:30:39.168 [2024-11-20 15:40:28.074573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.168 [2024-11-20 15:40:28.074602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.168 qpair failed and we were unable to recover it. 00:30:39.168 [2024-11-20 15:40:28.074975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.168 [2024-11-20 15:40:28.075005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.168 qpair failed and we were unable to recover it. 00:30:39.168 [2024-11-20 15:40:28.075452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.168 [2024-11-20 15:40:28.075487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.168 qpair failed and we were unable to recover it. 00:30:39.168 [2024-11-20 15:40:28.075826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.168 [2024-11-20 15:40:28.075855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.168 qpair failed and we were unable to recover it. 00:30:39.168 [2024-11-20 15:40:28.076228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.168 [2024-11-20 15:40:28.076257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.168 qpair failed and we were unable to recover it. 00:30:39.168 [2024-11-20 15:40:28.076493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.168 [2024-11-20 15:40:28.076525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.168 qpair failed and we were unable to recover it. 00:30:39.168 [2024-11-20 15:40:28.076928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.168 [2024-11-20 15:40:28.076958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.168 qpair failed and we were unable to recover it. 00:30:39.168 [2024-11-20 15:40:28.077208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.168 [2024-11-20 15:40:28.077237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.168 qpair failed and we were unable to recover it. 00:30:39.168 [2024-11-20 15:40:28.077580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.168 [2024-11-20 15:40:28.077609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.168 qpair failed and we were unable to recover it. 
00:30:39.168 [2024-11-20 15:40:28.077973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.168 [2024-11-20 15:40:28.078001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.168 qpair failed and we were unable to recover it. 00:30:39.168 [2024-11-20 15:40:28.078374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.168 [2024-11-20 15:40:28.078405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.168 qpair failed and we were unable to recover it. 00:30:39.168 [2024-11-20 15:40:28.078776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.168 [2024-11-20 15:40:28.078805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.168 qpair failed and we were unable to recover it. 00:30:39.168 [2024-11-20 15:40:28.079169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.168 [2024-11-20 15:40:28.079199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.168 qpair failed and we were unable to recover it. 00:30:39.168 [2024-11-20 15:40:28.079557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.168 [2024-11-20 15:40:28.079584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.168 qpair failed and we were unable to recover it. 00:30:39.168 [2024-11-20 15:40:28.079952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.168 [2024-11-20 15:40:28.079981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.168 qpair failed and we were unable to recover it. 00:30:39.168 [2024-11-20 15:40:28.080348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.168 [2024-11-20 15:40:28.080378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.168 qpair failed and we were unable to recover it. 00:30:39.168 [2024-11-20 15:40:28.080737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.168 [2024-11-20 15:40:28.080766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.168 qpair failed and we were unable to recover it. 00:30:39.168 [2024-11-20 15:40:28.081107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.168 [2024-11-20 15:40:28.081135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.168 qpair failed and we were unable to recover it. 00:30:39.168 [2024-11-20 15:40:28.081503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.168 [2024-11-20 15:40:28.081532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.168 qpair failed and we were unable to recover it. 
00:30:39.168 [2024-11-20 15:40:28.081891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.168 [2024-11-20 15:40:28.081919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.168 qpair failed and we were unable to recover it. 00:30:39.168 [2024-11-20 15:40:28.082268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.168 [2024-11-20 15:40:28.082298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.168 qpair failed and we were unable to recover it. 00:30:39.168 [2024-11-20 15:40:28.082589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.168 [2024-11-20 15:40:28.082617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.168 qpair failed and we were unable to recover it. 00:30:39.168 [2024-11-20 15:40:28.082973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.168 [2024-11-20 15:40:28.083002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.168 qpair failed and we were unable to recover it. 00:30:39.168 [2024-11-20 15:40:28.083362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.168 [2024-11-20 15:40:28.083391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.168 qpair failed and we were unable to recover it. 00:30:39.168 [2024-11-20 15:40:28.083755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.168 [2024-11-20 15:40:28.083784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.168 qpair failed and we were unable to recover it. 00:30:39.169 [2024-11-20 15:40:28.084130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.169 [2024-11-20 15:40:28.084166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.169 qpair failed and we were unable to recover it. 00:30:39.169 [2024-11-20 15:40:28.084526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.169 [2024-11-20 15:40:28.084555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.169 qpair failed and we were unable to recover it. 00:30:39.169 [2024-11-20 15:40:28.084922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.169 [2024-11-20 15:40:28.084951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.169 qpair failed and we were unable to recover it. 00:30:39.169 [2024-11-20 15:40:28.085360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.169 [2024-11-20 15:40:28.085389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.169 qpair failed and we were unable to recover it. 
00:30:39.169 [2024-11-20 15:40:28.085737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.169 [2024-11-20 15:40:28.085766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.169 qpair failed and we were unable to recover it. 00:30:39.169 [2024-11-20 15:40:28.086116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.169 [2024-11-20 15:40:28.086145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.169 qpair failed and we were unable to recover it. 00:30:39.169 [2024-11-20 15:40:28.086534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.169 [2024-11-20 15:40:28.086563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.169 qpair failed and we were unable to recover it. 00:30:39.169 [2024-11-20 15:40:28.086929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.169 [2024-11-20 15:40:28.086957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.169 qpair failed and we were unable to recover it. 00:30:39.169 [2024-11-20 15:40:28.087304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.169 [2024-11-20 15:40:28.087335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.169 qpair failed and we were unable to recover it. 00:30:39.169 [2024-11-20 15:40:28.087714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.169 [2024-11-20 15:40:28.087743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.169 qpair failed and we were unable to recover it. 00:30:39.169 [2024-11-20 15:40:28.088106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.169 [2024-11-20 15:40:28.088135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.169 qpair failed and we were unable to recover it. 00:30:39.169 [2024-11-20 15:40:28.088497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.169 [2024-11-20 15:40:28.088527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.169 qpair failed and we were unable to recover it. 00:30:39.169 [2024-11-20 15:40:28.088890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.169 [2024-11-20 15:40:28.088919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.169 qpair failed and we were unable to recover it. 00:30:39.169 [2024-11-20 15:40:28.089166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.169 [2024-11-20 15:40:28.089198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.169 qpair failed and we were unable to recover it. 
00:30:39.169 [2024-11-20 15:40:28.089549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.169 [2024-11-20 15:40:28.089578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.169 qpair failed and we were unable to recover it. 00:30:39.169 [2024-11-20 15:40:28.089941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.169 [2024-11-20 15:40:28.089969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.169 qpair failed and we were unable to recover it. 00:30:39.169 [2024-11-20 15:40:28.090318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.169 [2024-11-20 15:40:28.090347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.169 qpair failed and we were unable to recover it. 00:30:39.169 [2024-11-20 15:40:28.090706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.169 [2024-11-20 15:40:28.090741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.169 qpair failed and we were unable to recover it. 00:30:39.169 [2024-11-20 15:40:28.091107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.169 [2024-11-20 15:40:28.091136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.169 qpair failed and we were unable to recover it. 00:30:39.169 [2024-11-20 15:40:28.091506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.169 [2024-11-20 15:40:28.091536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.169 qpair failed and we were unable to recover it. 00:30:39.169 [2024-11-20 15:40:28.091893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.169 [2024-11-20 15:40:28.091921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.169 qpair failed and we were unable to recover it. 00:30:39.169 [2024-11-20 15:40:28.092279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.169 [2024-11-20 15:40:28.092310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.169 qpair failed and we were unable to recover it. 00:30:39.169 [2024-11-20 15:40:28.092673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.169 [2024-11-20 15:40:28.092704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.169 qpair failed and we were unable to recover it. 00:30:39.169 [2024-11-20 15:40:28.093050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.169 [2024-11-20 15:40:28.093079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.169 qpair failed and we were unable to recover it. 
00:30:39.169 [2024-11-20 15:40:28.093456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.169 [2024-11-20 15:40:28.093485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.169 qpair failed and we were unable to recover it. 00:30:39.169 [2024-11-20 15:40:28.093846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.169 [2024-11-20 15:40:28.093875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.169 qpair failed and we were unable to recover it. 00:30:39.169 [2024-11-20 15:40:28.094238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.169 [2024-11-20 15:40:28.094268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.169 qpair failed and we were unable to recover it. 00:30:39.445 [2024-11-20 15:40:28.094503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.445 [2024-11-20 15:40:28.094537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.445 qpair failed and we were unable to recover it. 00:30:39.445 [2024-11-20 15:40:28.094937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.445 [2024-11-20 15:40:28.094968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.445 qpair failed and we were unable to recover it. 00:30:39.445 [2024-11-20 15:40:28.095320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.445 [2024-11-20 15:40:28.095349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.445 qpair failed and we were unable to recover it. 00:30:39.445 [2024-11-20 15:40:28.095732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.445 [2024-11-20 15:40:28.095761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.445 qpair failed and we were unable to recover it. 00:30:39.445 [2024-11-20 15:40:28.096114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.445 [2024-11-20 15:40:28.096143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.445 qpair failed and we were unable to recover it. 00:30:39.445 [2024-11-20 15:40:28.096429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.445 [2024-11-20 15:40:28.096460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.445 qpair failed and we were unable to recover it. 00:30:39.445 [2024-11-20 15:40:28.096846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.445 [2024-11-20 15:40:28.096877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.445 qpair failed and we were unable to recover it. 
00:30:39.445 [2024-11-20 15:40:28.097227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.445 [2024-11-20 15:40:28.097258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.445 qpair failed and we were unable to recover it. 00:30:39.445 [2024-11-20 15:40:28.097633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.445 [2024-11-20 15:40:28.097663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.445 qpair failed and we were unable to recover it. 00:30:39.445 [2024-11-20 15:40:28.098019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.445 [2024-11-20 15:40:28.098049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.445 qpair failed and we were unable to recover it. 00:30:39.445 [2024-11-20 15:40:28.098286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.445 [2024-11-20 15:40:28.098317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.446 qpair failed and we were unable to recover it. 00:30:39.446 [2024-11-20 15:40:28.098767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.446 [2024-11-20 15:40:28.098796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.446 qpair failed and we were unable to recover it. 00:30:39.446 [2024-11-20 15:40:28.099152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.446 [2024-11-20 15:40:28.099203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.446 qpair failed and we were unable to recover it. 00:30:39.446 [2024-11-20 15:40:28.099607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.446 [2024-11-20 15:40:28.099636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.446 qpair failed and we were unable to recover it. 00:30:39.446 [2024-11-20 15:40:28.099992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.446 [2024-11-20 15:40:28.100021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.446 qpair failed and we were unable to recover it. 00:30:39.446 [2024-11-20 15:40:28.100430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.446 [2024-11-20 15:40:28.100460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.446 qpair failed and we were unable to recover it. 00:30:39.446 [2024-11-20 15:40:28.100824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.446 [2024-11-20 15:40:28.100852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.446 qpair failed and we were unable to recover it. 
00:30:39.446 [2024-11-20 15:40:28.101232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.446 [2024-11-20 15:40:28.101263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.446 qpair failed and we were unable to recover it. 00:30:39.446 [2024-11-20 15:40:28.101509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.446 [2024-11-20 15:40:28.101538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.446 qpair failed and we were unable to recover it. 00:30:39.446 [2024-11-20 15:40:28.101899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.446 [2024-11-20 15:40:28.101927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.446 qpair failed and we were unable to recover it. 00:30:39.446 [2024-11-20 15:40:28.102298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.446 [2024-11-20 15:40:28.102328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.446 qpair failed and we were unable to recover it. 00:30:39.446 [2024-11-20 15:40:28.102701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.446 [2024-11-20 15:40:28.102729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.446 qpair failed and we were unable to recover it. 00:30:39.446 [2024-11-20 15:40:28.103096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.446 [2024-11-20 15:40:28.103125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.446 qpair failed and we were unable to recover it. 00:30:39.446 [2024-11-20 15:40:28.103486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.446 [2024-11-20 15:40:28.103516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.446 qpair failed and we were unable to recover it. 00:30:39.446 [2024-11-20 15:40:28.103890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.446 [2024-11-20 15:40:28.103920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.446 qpair failed and we were unable to recover it. 00:30:39.446 [2024-11-20 15:40:28.104111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.446 [2024-11-20 15:40:28.104140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.446 qpair failed and we were unable to recover it. 00:30:39.446 [2024-11-20 15:40:28.104437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.446 [2024-11-20 15:40:28.104466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.446 qpair failed and we were unable to recover it. 
00:30:39.446 [2024-11-20 15:40:28.104725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:39.446 [2024-11-20 15:40:28.104753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:39.446 qpair failed and we were unable to recover it.
[The same three-line error pattern repeats about 200 more times between 15:40:28.105 and 15:40:28.184 (elapsed 00:30:39.446 through 00:30:39.452), differing only in timestamps: every reconnect attempt to tqpair=0x7f8e88000b90 at 10.0.0.2, port 4420 fails with errno = 111 and the qpair cannot be recovered.]
00:30:39.452 [2024-11-20 15:40:28.184492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.453 [2024-11-20 15:40:28.184523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.453 qpair failed and we were unable to recover it. 00:30:39.453 [2024-11-20 15:40:28.184760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.453 [2024-11-20 15:40:28.184788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.453 qpair failed and we were unable to recover it. 00:30:39.453 [2024-11-20 15:40:28.185195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.453 [2024-11-20 15:40:28.185225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.453 qpair failed and we were unable to recover it. 00:30:39.453 [2024-11-20 15:40:28.185489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.453 [2024-11-20 15:40:28.185519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.453 qpair failed and we were unable to recover it. 00:30:39.453 [2024-11-20 15:40:28.185790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.453 [2024-11-20 15:40:28.185822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.453 qpair failed and we were unable to recover it. 00:30:39.453 [2024-11-20 15:40:28.186192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.453 [2024-11-20 15:40:28.186224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.453 qpair failed and we were unable to recover it. 00:30:39.453 [2024-11-20 15:40:28.186585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.453 [2024-11-20 15:40:28.186618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.453 qpair failed and we were unable to recover it. 00:30:39.453 [2024-11-20 15:40:28.186862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.453 [2024-11-20 15:40:28.186891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.453 qpair failed and we were unable to recover it. 00:30:39.453 [2024-11-20 15:40:28.187283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.453 [2024-11-20 15:40:28.187313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.453 qpair failed and we were unable to recover it. 00:30:39.453 [2024-11-20 15:40:28.187680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.453 [2024-11-20 15:40:28.187710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.453 qpair failed and we were unable to recover it. 
00:30:39.453 [2024-11-20 15:40:28.188028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.453 [2024-11-20 15:40:28.188056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.453 qpair failed and we were unable to recover it. 00:30:39.453 [2024-11-20 15:40:28.188474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.453 [2024-11-20 15:40:28.188506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.453 qpair failed and we were unable to recover it. 00:30:39.453 [2024-11-20 15:40:28.188883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.453 [2024-11-20 15:40:28.188913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.453 qpair failed and we were unable to recover it. 00:30:39.453 [2024-11-20 15:40:28.189263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.453 [2024-11-20 15:40:28.189293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.453 qpair failed and we were unable to recover it. 00:30:39.453 [2024-11-20 15:40:28.189666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.453 [2024-11-20 15:40:28.189696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.453 qpair failed and we were unable to recover it. 00:30:39.453 [2024-11-20 15:40:28.189955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.453 [2024-11-20 15:40:28.189988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.453 qpair failed and we were unable to recover it. 00:30:39.453 [2024-11-20 15:40:28.190352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.453 [2024-11-20 15:40:28.190382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.453 qpair failed and we were unable to recover it. 00:30:39.453 [2024-11-20 15:40:28.190750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.453 [2024-11-20 15:40:28.190780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.453 qpair failed and we were unable to recover it. 00:30:39.453 [2024-11-20 15:40:28.191052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.453 [2024-11-20 15:40:28.191081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.453 qpair failed and we were unable to recover it. 00:30:39.453 [2024-11-20 15:40:28.191504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.453 [2024-11-20 15:40:28.191534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.453 qpair failed and we were unable to recover it. 
00:30:39.453 [2024-11-20 15:40:28.191911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.453 [2024-11-20 15:40:28.191941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.453 qpair failed and we were unable to recover it. 00:30:39.453 [2024-11-20 15:40:28.192326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.453 [2024-11-20 15:40:28.192358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.453 qpair failed and we were unable to recover it. 00:30:39.453 [2024-11-20 15:40:28.192726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.453 [2024-11-20 15:40:28.192756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.453 qpair failed and we were unable to recover it. 00:30:39.453 [2024-11-20 15:40:28.193104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.453 [2024-11-20 15:40:28.193133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.453 qpair failed and we were unable to recover it. 00:30:39.453 [2024-11-20 15:40:28.193389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.453 [2024-11-20 15:40:28.193419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.453 qpair failed and we were unable to recover it. 00:30:39.453 [2024-11-20 15:40:28.193816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.453 [2024-11-20 15:40:28.193845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.453 qpair failed and we were unable to recover it. 00:30:39.453 [2024-11-20 15:40:28.194117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.453 [2024-11-20 15:40:28.194146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.453 qpair failed and we were unable to recover it. 00:30:39.453 [2024-11-20 15:40:28.194531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.453 [2024-11-20 15:40:28.194566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.453 qpair failed and we were unable to recover it. 00:30:39.453 [2024-11-20 15:40:28.194973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.453 [2024-11-20 15:40:28.195002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.453 qpair failed and we were unable to recover it. 00:30:39.453 [2024-11-20 15:40:28.195356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.453 [2024-11-20 15:40:28.195386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.453 qpair failed and we were unable to recover it. 
00:30:39.453 [2024-11-20 15:40:28.195751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.453 [2024-11-20 15:40:28.195779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.453 qpair failed and we were unable to recover it. 00:30:39.453 [2024-11-20 15:40:28.196165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.453 [2024-11-20 15:40:28.196195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.453 qpair failed and we were unable to recover it. 00:30:39.453 [2024-11-20 15:40:28.196602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.453 [2024-11-20 15:40:28.196632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.453 qpair failed and we were unable to recover it. 00:30:39.453 [2024-11-20 15:40:28.197012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.453 [2024-11-20 15:40:28.197047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.453 qpair failed and we were unable to recover it. 00:30:39.453 [2024-11-20 15:40:28.197401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.453 [2024-11-20 15:40:28.197431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.453 qpair failed and we were unable to recover it. 00:30:39.453 [2024-11-20 15:40:28.197678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.453 [2024-11-20 15:40:28.197706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.453 qpair failed and we were unable to recover it. 00:30:39.453 [2024-11-20 15:40:28.198085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.453 [2024-11-20 15:40:28.198117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.453 qpair failed and we were unable to recover it. 00:30:39.453 [2024-11-20 15:40:28.198525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.454 [2024-11-20 15:40:28.198555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.454 qpair failed and we were unable to recover it. 00:30:39.454 [2024-11-20 15:40:28.198814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.454 [2024-11-20 15:40:28.198842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.454 qpair failed and we were unable to recover it. 00:30:39.454 [2024-11-20 15:40:28.199197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.454 [2024-11-20 15:40:28.199227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.454 qpair failed and we were unable to recover it. 
00:30:39.454 [2024-11-20 15:40:28.199490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.454 [2024-11-20 15:40:28.199519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.454 qpair failed and we were unable to recover it. 00:30:39.454 [2024-11-20 15:40:28.199754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.454 [2024-11-20 15:40:28.199783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.454 qpair failed and we were unable to recover it. 00:30:39.454 [2024-11-20 15:40:28.200179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.454 [2024-11-20 15:40:28.200208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.454 qpair failed and we were unable to recover it. 00:30:39.454 [2024-11-20 15:40:28.200625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.454 [2024-11-20 15:40:28.200654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.454 qpair failed and we were unable to recover it. 00:30:39.454 [2024-11-20 15:40:28.200908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.454 [2024-11-20 15:40:28.200942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.454 qpair failed and we were unable to recover it. 00:30:39.454 [2024-11-20 15:40:28.201325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.454 [2024-11-20 15:40:28.201354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.454 qpair failed and we were unable to recover it. 00:30:39.454 [2024-11-20 15:40:28.201746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.454 [2024-11-20 15:40:28.201777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.454 qpair failed and we were unable to recover it. 00:30:39.454 [2024-11-20 15:40:28.202194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.454 [2024-11-20 15:40:28.202225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.454 qpair failed and we were unable to recover it. 00:30:39.454 [2024-11-20 15:40:28.202536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.454 [2024-11-20 15:40:28.202565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.454 qpair failed and we were unable to recover it. 00:30:39.454 [2024-11-20 15:40:28.202828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.454 [2024-11-20 15:40:28.202857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.454 qpair failed and we were unable to recover it. 
00:30:39.454 [2024-11-20 15:40:28.203220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.454 [2024-11-20 15:40:28.203251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.454 qpair failed and we were unable to recover it. 00:30:39.454 [2024-11-20 15:40:28.203515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.454 [2024-11-20 15:40:28.203544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.454 qpair failed and we were unable to recover it. 00:30:39.454 [2024-11-20 15:40:28.203908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.454 [2024-11-20 15:40:28.203938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.454 qpair failed and we were unable to recover it. 00:30:39.454 [2024-11-20 15:40:28.204313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.454 [2024-11-20 15:40:28.204342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.454 qpair failed and we were unable to recover it. 00:30:39.454 [2024-11-20 15:40:28.204711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.454 [2024-11-20 15:40:28.204740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.454 qpair failed and we were unable to recover it. 00:30:39.454 [2024-11-20 15:40:28.205074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.454 [2024-11-20 15:40:28.205103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.454 qpair failed and we were unable to recover it. 00:30:39.454 [2024-11-20 15:40:28.205487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.454 [2024-11-20 15:40:28.205518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.454 qpair failed and we were unable to recover it. 00:30:39.454 [2024-11-20 15:40:28.205865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.454 [2024-11-20 15:40:28.205895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.454 qpair failed and we were unable to recover it. 00:30:39.454 [2024-11-20 15:40:28.206116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.454 [2024-11-20 15:40:28.206147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.454 qpair failed and we were unable to recover it. 00:30:39.454 [2024-11-20 15:40:28.206427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.454 [2024-11-20 15:40:28.206460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.454 qpair failed and we were unable to recover it. 
00:30:39.454 [2024-11-20 15:40:28.206836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.454 [2024-11-20 15:40:28.206868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.454 qpair failed and we were unable to recover it. 00:30:39.454 [2024-11-20 15:40:28.207226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.454 [2024-11-20 15:40:28.207258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.454 qpair failed and we were unable to recover it. 00:30:39.454 [2024-11-20 15:40:28.207610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.454 [2024-11-20 15:40:28.207648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.454 qpair failed and we were unable to recover it. 00:30:39.454 [2024-11-20 15:40:28.208012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.454 [2024-11-20 15:40:28.208042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.454 qpair failed and we were unable to recover it. 00:30:39.454 [2024-11-20 15:40:28.208434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.454 [2024-11-20 15:40:28.208465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.454 qpair failed and we were unable to recover it. 00:30:39.454 [2024-11-20 15:40:28.208903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.454 [2024-11-20 15:40:28.208932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.454 qpair failed and we were unable to recover it. 00:30:39.454 [2024-11-20 15:40:28.209274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.454 [2024-11-20 15:40:28.209304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.454 qpair failed and we were unable to recover it. 00:30:39.454 [2024-11-20 15:40:28.209672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.454 [2024-11-20 15:40:28.209700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.454 qpair failed and we were unable to recover it. 00:30:39.454 [2024-11-20 15:40:28.210072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.454 [2024-11-20 15:40:28.210101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.454 qpair failed and we were unable to recover it. 00:30:39.454 [2024-11-20 15:40:28.210406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.454 [2024-11-20 15:40:28.210436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.454 qpair failed and we were unable to recover it. 
00:30:39.454 [2024-11-20 15:40:28.210772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.454 [2024-11-20 15:40:28.210800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.454 qpair failed and we were unable to recover it. 00:30:39.454 [2024-11-20 15:40:28.211060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.454 [2024-11-20 15:40:28.211089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.454 qpair failed and we were unable to recover it. 00:30:39.454 [2024-11-20 15:40:28.211456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.454 [2024-11-20 15:40:28.211488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.454 qpair failed and we were unable to recover it. 00:30:39.454 [2024-11-20 15:40:28.211819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.454 [2024-11-20 15:40:28.211854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.454 qpair failed and we were unable to recover it. 00:30:39.455 [2024-11-20 15:40:28.212262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.455 [2024-11-20 15:40:28.212293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.455 qpair failed and we were unable to recover it. 00:30:39.455 [2024-11-20 15:40:28.212666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.455 [2024-11-20 15:40:28.212695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.455 qpair failed and we were unable to recover it. 00:30:39.455 [2024-11-20 15:40:28.212822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.455 [2024-11-20 15:40:28.212851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.455 qpair failed and we were unable to recover it. 00:30:39.455 [2024-11-20 15:40:28.213107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.455 [2024-11-20 15:40:28.213136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.455 qpair failed and we were unable to recover it. 00:30:39.455 [2024-11-20 15:40:28.213406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.455 [2024-11-20 15:40:28.213435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.455 qpair failed and we were unable to recover it. 00:30:39.455 [2024-11-20 15:40:28.213778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.455 [2024-11-20 15:40:28.213808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.455 qpair failed and we were unable to recover it. 
00:30:39.455 [2024-11-20 15:40:28.214076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.455 [2024-11-20 15:40:28.214106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.455 qpair failed and we were unable to recover it. 00:30:39.455 [2024-11-20 15:40:28.214478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.455 [2024-11-20 15:40:28.214508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.455 qpair failed and we were unable to recover it. 00:30:39.455 [2024-11-20 15:40:28.214761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.455 [2024-11-20 15:40:28.214791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.455 qpair failed and we were unable to recover it. 00:30:39.455 [2024-11-20 15:40:28.215149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.455 [2024-11-20 15:40:28.215198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.455 qpair failed and we were unable to recover it. 00:30:39.455 [2024-11-20 15:40:28.215547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.455 [2024-11-20 15:40:28.215578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.455 qpair failed and we were unable to recover it. 00:30:39.455 [2024-11-20 15:40:28.215939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.455 [2024-11-20 15:40:28.215968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.455 qpair failed and we were unable to recover it. 00:30:39.455 [2024-11-20 15:40:28.216343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.455 [2024-11-20 15:40:28.216373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.455 qpair failed and we were unable to recover it. 00:30:39.455 [2024-11-20 15:40:28.216739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.455 [2024-11-20 15:40:28.216769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.455 qpair failed and we were unable to recover it. 00:30:39.455 [2024-11-20 15:40:28.217141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.455 [2024-11-20 15:40:28.217177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.455 qpair failed and we were unable to recover it. 00:30:39.455 [2024-11-20 15:40:28.217538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.455 [2024-11-20 15:40:28.217567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.455 qpair failed and we were unable to recover it. 
00:30:39.455 [2024-11-20 15:40:28.217792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.455 [2024-11-20 15:40:28.217820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.455 qpair failed and we were unable to recover it. 00:30:39.455 [2024-11-20 15:40:28.218103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.455 [2024-11-20 15:40:28.218134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.455 qpair failed and we were unable to recover it. 00:30:39.455 [2024-11-20 15:40:28.218441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.455 [2024-11-20 15:40:28.218471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.455 qpair failed and we were unable to recover it. 00:30:39.455 [2024-11-20 15:40:28.218734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.455 [2024-11-20 15:40:28.218763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.455 qpair failed and we were unable to recover it. 00:30:39.455 [2024-11-20 15:40:28.219048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.455 [2024-11-20 15:40:28.219077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.455 qpair failed and we were unable to recover it. 00:30:39.455 [2024-11-20 15:40:28.219480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.455 [2024-11-20 15:40:28.219509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.455 qpair failed and we were unable to recover it. 00:30:39.455 [2024-11-20 15:40:28.219878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.455 [2024-11-20 15:40:28.219906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.455 qpair failed and we were unable to recover it. 00:30:39.455 [2024-11-20 15:40:28.220337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.455 [2024-11-20 15:40:28.220367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.455 qpair failed and we were unable to recover it. 00:30:39.455 [2024-11-20 15:40:28.220734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.455 [2024-11-20 15:40:28.220762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.455 qpair failed and we were unable to recover it. 00:30:39.455 [2024-11-20 15:40:28.221112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.455 [2024-11-20 15:40:28.221139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.455 qpair failed and we were unable to recover it. 
00:30:39.455 [2024-11-20 15:40:28.221505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.455 [2024-11-20 15:40:28.221535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.455 qpair failed and we were unable to recover it. 00:30:39.455 [2024-11-20 15:40:28.221892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.455 [2024-11-20 15:40:28.221921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.455 qpair failed and we were unable to recover it. 00:30:39.455 [2024-11-20 15:40:28.222221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.455 [2024-11-20 15:40:28.222250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.455 qpair failed and we were unable to recover it. 00:30:39.455 [2024-11-20 15:40:28.222657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.455 [2024-11-20 15:40:28.222686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.455 qpair failed and we were unable to recover it. 00:30:39.455 [2024-11-20 15:40:28.223052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.455 [2024-11-20 15:40:28.223081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.455 qpair failed and we were unable to recover it. 00:30:39.455 [2024-11-20 15:40:28.223453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.456 [2024-11-20 15:40:28.223484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.456 qpair failed and we were unable to recover it. 00:30:39.456 [2024-11-20 15:40:28.223854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.456 [2024-11-20 15:40:28.223882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.456 qpair failed and we were unable to recover it. 00:30:39.456 [2024-11-20 15:40:28.224236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.456 [2024-11-20 15:40:28.224266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.456 qpair failed and we were unable to recover it. 00:30:39.456 [2024-11-20 15:40:28.224608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.456 [2024-11-20 15:40:28.224637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.456 qpair failed and we were unable to recover it. 00:30:39.456 [2024-11-20 15:40:28.224997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.456 [2024-11-20 15:40:28.225025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.456 qpair failed and we were unable to recover it. 
00:30:39.456 [2024-11-20 15:40:28.225408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.456 [2024-11-20 15:40:28.225439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.456 qpair failed and we were unable to recover it. 00:30:39.456 [2024-11-20 15:40:28.225805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.456 [2024-11-20 15:40:28.225834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.456 qpair failed and we were unable to recover it. 00:30:39.456 [2024-11-20 15:40:28.226105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.456 [2024-11-20 15:40:28.226134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.456 qpair failed and we were unable to recover it. 00:30:39.456 [2024-11-20 15:40:28.226531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.456 [2024-11-20 15:40:28.226560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.456 qpair failed and we were unable to recover it. 00:30:39.456 [2024-11-20 15:40:28.226922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.456 [2024-11-20 15:40:28.226951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.456 qpair failed and we were unable to recover it. 00:30:39.456 [2024-11-20 15:40:28.227340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.456 [2024-11-20 15:40:28.227370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.456 qpair failed and we were unable to recover it. 00:30:39.456 [2024-11-20 15:40:28.227733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.456 [2024-11-20 15:40:28.227762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.456 qpair failed and we were unable to recover it. 00:30:39.456 [2024-11-20 15:40:28.228182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.456 [2024-11-20 15:40:28.228212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.456 qpair failed and we were unable to recover it. 00:30:39.456 [2024-11-20 15:40:28.228656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.456 [2024-11-20 15:40:28.228684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.456 qpair failed and we were unable to recover it. 00:30:39.456 [2024-11-20 15:40:28.229055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.456 [2024-11-20 15:40:28.229083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.456 qpair failed and we were unable to recover it. 
00:30:39.456 [2024-11-20 15:40:28.229441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.456 [2024-11-20 15:40:28.229471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.456 qpair failed and we were unable to recover it. 00:30:39.456 [2024-11-20 15:40:28.229805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.456 [2024-11-20 15:40:28.229834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.456 qpair failed and we were unable to recover it. 00:30:39.456 [2024-11-20 15:40:28.230220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.456 [2024-11-20 15:40:28.230250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.456 qpair failed and we were unable to recover it. 00:30:39.456 [2024-11-20 15:40:28.230613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.456 [2024-11-20 15:40:28.230642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.456 qpair failed and we were unable to recover it. 00:30:39.456 [2024-11-20 15:40:28.231000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.456 [2024-11-20 15:40:28.231029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.456 qpair failed and we were unable to recover it. 00:30:39.456 [2024-11-20 15:40:28.231461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.456 [2024-11-20 15:40:28.231490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.456 qpair failed and we were unable to recover it. 00:30:39.456 [2024-11-20 15:40:28.231817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.456 [2024-11-20 15:40:28.231846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.456 qpair failed and we were unable to recover it. 00:30:39.456 [2024-11-20 15:40:28.232099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.456 [2024-11-20 15:40:28.232128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.456 qpair failed and we were unable to recover it. 00:30:39.456 [2024-11-20 15:40:28.232444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.456 [2024-11-20 15:40:28.232474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.456 qpair failed and we were unable to recover it. 00:30:39.456 [2024-11-20 15:40:28.232853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.456 [2024-11-20 15:40:28.232882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.456 qpair failed and we were unable to recover it. 
00:30:39.456 [2024-11-20 15:40:28.233253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:39.456 [2024-11-20 15:40:28.233284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:39.456 qpair failed and we were unable to recover it.
[... the same three-line failure sequence repeats for every reconnect attempt between 15:40:28.233253 and 15:40:28.312872 (elapsed 00:30:39.456 through 00:30:39.462): each connect() to 10.0.0.2 port 4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports the socket error for tqpair=0x7f8e88000b90, and the qpair cannot be recovered ...]
00:30:39.462 [2024-11-20 15:40:28.312841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:39.462 [2024-11-20 15:40:28.312872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:39.462 qpair failed and we were unable to recover it.
00:30:39.462 [2024-11-20 15:40:28.313227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.462 [2024-11-20 15:40:28.313264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.462 qpair failed and we were unable to recover it. 00:30:39.462 [2024-11-20 15:40:28.313609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.462 [2024-11-20 15:40:28.313638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.462 qpair failed and we were unable to recover it. 00:30:39.462 [2024-11-20 15:40:28.313978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.462 [2024-11-20 15:40:28.314008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.462 qpair failed and we were unable to recover it. 00:30:39.462 [2024-11-20 15:40:28.314391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.462 [2024-11-20 15:40:28.314421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.462 qpair failed and we were unable to recover it. 00:30:39.462 [2024-11-20 15:40:28.314787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.462 [2024-11-20 15:40:28.314816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.462 qpair failed and we were unable to recover it. 00:30:39.462 [2024-11-20 15:40:28.315219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.462 [2024-11-20 15:40:28.315249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.462 qpair failed and we were unable to recover it. 00:30:39.462 [2024-11-20 15:40:28.315597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.462 [2024-11-20 15:40:28.315626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.462 qpair failed and we were unable to recover it. 00:30:39.462 [2024-11-20 15:40:28.315996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.462 [2024-11-20 15:40:28.316025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.462 qpair failed and we were unable to recover it. 00:30:39.462 [2024-11-20 15:40:28.316399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.462 [2024-11-20 15:40:28.316430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.462 qpair failed and we were unable to recover it. 00:30:39.462 [2024-11-20 15:40:28.316800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.462 [2024-11-20 15:40:28.316828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.462 qpair failed and we were unable to recover it. 
00:30:39.462 [2024-11-20 15:40:28.317213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.462 [2024-11-20 15:40:28.317244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.462 qpair failed and we were unable to recover it. 00:30:39.462 [2024-11-20 15:40:28.317619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.462 [2024-11-20 15:40:28.317649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.462 qpair failed and we were unable to recover it. 00:30:39.462 [2024-11-20 15:40:28.317981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.462 [2024-11-20 15:40:28.318010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.462 qpair failed and we were unable to recover it. 00:30:39.462 [2024-11-20 15:40:28.318360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.462 [2024-11-20 15:40:28.318392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.462 qpair failed and we were unable to recover it. 00:30:39.462 [2024-11-20 15:40:28.318764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.462 [2024-11-20 15:40:28.318794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.462 qpair failed and we were unable to recover it. 00:30:39.462 [2024-11-20 15:40:28.319196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.462 [2024-11-20 15:40:28.319225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.462 qpair failed and we were unable to recover it. 00:30:39.462 [2024-11-20 15:40:28.319537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.462 [2024-11-20 15:40:28.319565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.462 qpair failed and we were unable to recover it. 00:30:39.462 [2024-11-20 15:40:28.319810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.462 [2024-11-20 15:40:28.319839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.462 qpair failed and we were unable to recover it. 00:30:39.462 [2024-11-20 15:40:28.320212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.462 [2024-11-20 15:40:28.320243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.462 qpair failed and we were unable to recover it. 00:30:39.462 [2024-11-20 15:40:28.320485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.462 [2024-11-20 15:40:28.320516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.462 qpair failed and we were unable to recover it. 
00:30:39.462 [2024-11-20 15:40:28.320898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.462 [2024-11-20 15:40:28.320927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.462 qpair failed and we were unable to recover it. 00:30:39.462 [2024-11-20 15:40:28.321286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.462 [2024-11-20 15:40:28.321317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.462 qpair failed and we were unable to recover it. 00:30:39.462 [2024-11-20 15:40:28.321681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.462 [2024-11-20 15:40:28.321709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.462 qpair failed and we were unable to recover it. 00:30:39.462 [2024-11-20 15:40:28.322086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.462 [2024-11-20 15:40:28.322115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.462 qpair failed and we were unable to recover it. 00:30:39.462 [2024-11-20 15:40:28.322456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.462 [2024-11-20 15:40:28.322485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.462 qpair failed and we were unable to recover it. 00:30:39.462 [2024-11-20 15:40:28.322875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.462 [2024-11-20 15:40:28.322906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.462 qpair failed and we were unable to recover it. 00:30:39.462 [2024-11-20 15:40:28.323150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.462 [2024-11-20 15:40:28.323188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.462 qpair failed and we were unable to recover it. 00:30:39.462 [2024-11-20 15:40:28.323576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.462 [2024-11-20 15:40:28.323605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.462 qpair failed and we were unable to recover it. 00:30:39.462 [2024-11-20 15:40:28.324024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.462 [2024-11-20 15:40:28.324052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.462 qpair failed and we were unable to recover it. 00:30:39.462 [2024-11-20 15:40:28.324460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.462 [2024-11-20 15:40:28.324490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.462 qpair failed and we were unable to recover it. 
00:30:39.462 [2024-11-20 15:40:28.324841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.462 [2024-11-20 15:40:28.324871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.462 qpair failed and we were unable to recover it. 00:30:39.462 [2024-11-20 15:40:28.325140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.462 [2024-11-20 15:40:28.325190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.462 qpair failed and we were unable to recover it. 00:30:39.462 [2024-11-20 15:40:28.325542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.462 [2024-11-20 15:40:28.325572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.462 qpair failed and we were unable to recover it. 00:30:39.462 [2024-11-20 15:40:28.326051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.462 [2024-11-20 15:40:28.326079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.462 qpair failed and we were unable to recover it. 00:30:39.462 [2024-11-20 15:40:28.326408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.462 [2024-11-20 15:40:28.326438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.462 qpair failed and we were unable to recover it. 00:30:39.462 [2024-11-20 15:40:28.326777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.462 [2024-11-20 15:40:28.326806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.462 qpair failed and we were unable to recover it. 00:30:39.462 [2024-11-20 15:40:28.327177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.462 [2024-11-20 15:40:28.327206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.462 qpair failed and we were unable to recover it. 00:30:39.462 [2024-11-20 15:40:28.327630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.462 [2024-11-20 15:40:28.327658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.462 qpair failed and we were unable to recover it. 00:30:39.462 [2024-11-20 15:40:28.327919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.462 [2024-11-20 15:40:28.327950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.462 qpair failed and we were unable to recover it. 00:30:39.462 [2024-11-20 15:40:28.328331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.328361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 
00:30:39.463 [2024-11-20 15:40:28.328723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.328759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 00:30:39.463 [2024-11-20 15:40:28.329115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.329144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 00:30:39.463 [2024-11-20 15:40:28.329516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.329546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 00:30:39.463 [2024-11-20 15:40:28.329908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.329937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 00:30:39.463 [2024-11-20 15:40:28.330305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.330335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 00:30:39.463 [2024-11-20 15:40:28.330699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.330730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 00:30:39.463 [2024-11-20 15:40:28.331079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.331108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 00:30:39.463 [2024-11-20 15:40:28.331491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.331520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 00:30:39.463 [2024-11-20 15:40:28.331911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.331941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 00:30:39.463 [2024-11-20 15:40:28.332313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.332343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 
00:30:39.463 [2024-11-20 15:40:28.332708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.332737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 00:30:39.463 [2024-11-20 15:40:28.333094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.333124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 00:30:39.463 [2024-11-20 15:40:28.333510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.333539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 00:30:39.463 [2024-11-20 15:40:28.333899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.333927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 00:30:39.463 [2024-11-20 15:40:28.334285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.334315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 00:30:39.463 [2024-11-20 15:40:28.334682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.334711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 00:30:39.463 [2024-11-20 15:40:28.335073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.335101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 00:30:39.463 [2024-11-20 15:40:28.335472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.335502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 00:30:39.463 [2024-11-20 15:40:28.335863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.335892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 00:30:39.463 [2024-11-20 15:40:28.336269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.336299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 
00:30:39.463 [2024-11-20 15:40:28.336663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.336693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 00:30:39.463 [2024-11-20 15:40:28.337027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.337055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 00:30:39.463 [2024-11-20 15:40:28.337464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.337495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 00:30:39.463 [2024-11-20 15:40:28.337829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.337856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 00:30:39.463 [2024-11-20 15:40:28.338190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.338219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 00:30:39.463 [2024-11-20 15:40:28.338640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.338669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 00:30:39.463 [2024-11-20 15:40:28.338915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.338944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 00:30:39.463 [2024-11-20 15:40:28.339244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.339275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 00:30:39.463 [2024-11-20 15:40:28.339664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.339694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 00:30:39.463 [2024-11-20 15:40:28.340074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.340103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 
00:30:39.463 [2024-11-20 15:40:28.340494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.340524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 00:30:39.463 [2024-11-20 15:40:28.340775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.340807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 00:30:39.463 [2024-11-20 15:40:28.341180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.341212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 00:30:39.463 [2024-11-20 15:40:28.341488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.341516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 00:30:39.463 [2024-11-20 15:40:28.341895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.341927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 00:30:39.463 [2024-11-20 15:40:28.342280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.342311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 00:30:39.463 [2024-11-20 15:40:28.342529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.342557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 00:30:39.463 [2024-11-20 15:40:28.343020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.343049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 00:30:39.463 [2024-11-20 15:40:28.343410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.343440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 00:30:39.463 [2024-11-20 15:40:28.343802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.343830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 
00:30:39.463 [2024-11-20 15:40:28.344181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.344217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 00:30:39.463 [2024-11-20 15:40:28.344540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.344568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 00:30:39.463 [2024-11-20 15:40:28.344922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.463 [2024-11-20 15:40:28.344950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.463 qpair failed and we were unable to recover it. 00:30:39.464 [2024-11-20 15:40:28.345298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.464 [2024-11-20 15:40:28.345330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.464 qpair failed and we were unable to recover it. 00:30:39.464 [2024-11-20 15:40:28.345696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.464 [2024-11-20 15:40:28.345725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.464 qpair failed and we were unable to recover it. 00:30:39.464 [2024-11-20 15:40:28.346086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.464 [2024-11-20 15:40:28.346117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.464 qpair failed and we were unable to recover it. 00:30:39.464 [2024-11-20 15:40:28.346512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.464 [2024-11-20 15:40:28.346543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.464 qpair failed and we were unable to recover it. 00:30:39.464 [2024-11-20 15:40:28.346889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.464 [2024-11-20 15:40:28.346919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.464 qpair failed and we were unable to recover it. 00:30:39.464 [2024-11-20 15:40:28.347291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.464 [2024-11-20 15:40:28.347322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.464 qpair failed and we were unable to recover it. 00:30:39.464 [2024-11-20 15:40:28.347713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.464 [2024-11-20 15:40:28.347742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.464 qpair failed and we were unable to recover it. 
00:30:39.464 [2024-11-20 15:40:28.347989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.464 [2024-11-20 15:40:28.348018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.464 qpair failed and we were unable to recover it. 00:30:39.464 [2024-11-20 15:40:28.348460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.464 [2024-11-20 15:40:28.348490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.464 qpair failed and we were unable to recover it. 00:30:39.464 [2024-11-20 15:40:28.348848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.464 [2024-11-20 15:40:28.348878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.464 qpair failed and we were unable to recover it. 00:30:39.464 [2024-11-20 15:40:28.349240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.464 [2024-11-20 15:40:28.349271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.464 qpair failed and we were unable to recover it. 00:30:39.464 [2024-11-20 15:40:28.349630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.464 [2024-11-20 15:40:28.349661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.464 qpair failed and we were unable to recover it. 00:30:39.464 [2024-11-20 15:40:28.349932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.464 [2024-11-20 15:40:28.349960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.464 qpair failed and we were unable to recover it. 00:30:39.464 [2024-11-20 15:40:28.350308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.464 [2024-11-20 15:40:28.350337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.464 qpair failed and we were unable to recover it. 00:30:39.464 [2024-11-20 15:40:28.350700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.464 [2024-11-20 15:40:28.350731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.464 qpair failed and we were unable to recover it. 00:30:39.464 [2024-11-20 15:40:28.351085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.464 [2024-11-20 15:40:28.351116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.464 qpair failed and we were unable to recover it. 00:30:39.464 [2024-11-20 15:40:28.351504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.464 [2024-11-20 15:40:28.351536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.464 qpair failed and we were unable to recover it. 
00:30:39.464 [2024-11-20 15:40:28.351896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.464 [2024-11-20 15:40:28.351924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.464 qpair failed and we were unable to recover it. 00:30:39.464 [2024-11-20 15:40:28.352278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.464 [2024-11-20 15:40:28.352308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.464 qpair failed and we were unable to recover it. 00:30:39.464 [2024-11-20 15:40:28.352659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.464 [2024-11-20 15:40:28.352688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.464 qpair failed and we were unable to recover it. 00:30:39.464 [2024-11-20 15:40:28.353047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.464 [2024-11-20 15:40:28.353076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.464 qpair failed and we were unable to recover it. 00:30:39.464 [2024-11-20 15:40:28.353399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.464 [2024-11-20 15:40:28.353429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.464 qpair failed and we were unable to recover it. 00:30:39.464 [2024-11-20 15:40:28.353807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.464 [2024-11-20 15:40:28.353836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.464 qpair failed and we were unable to recover it. 00:30:39.464 [2024-11-20 15:40:28.354197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.464 [2024-11-20 15:40:28.354227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.464 qpair failed and we were unable to recover it. 00:30:39.464 [2024-11-20 15:40:28.354619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.464 [2024-11-20 15:40:28.354648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.464 qpair failed and we were unable to recover it. 00:30:39.464 [2024-11-20 15:40:28.354985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.464 [2024-11-20 15:40:28.355015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.464 qpair failed and we were unable to recover it. 00:30:39.464 [2024-11-20 15:40:28.355275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.464 [2024-11-20 15:40:28.355305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.464 qpair failed and we were unable to recover it. 
00:30:39.464 [2024-11-20 15:40:28.355683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.464 [2024-11-20 15:40:28.355712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.464 qpair failed and we were unable to recover it. 00:30:39.464 [2024-11-20 15:40:28.356061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.464 [2024-11-20 15:40:28.356090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.464 qpair failed and we were unable to recover it. 00:30:39.464 [2024-11-20 15:40:28.356476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.464 [2024-11-20 15:40:28.356507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.464 qpair failed and we were unable to recover it. 00:30:39.464 [2024-11-20 15:40:28.356912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.464 [2024-11-20 15:40:28.356941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.464 qpair failed and we were unable to recover it. 00:30:39.464 [2024-11-20 15:40:28.357236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.464 [2024-11-20 15:40:28.357265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.464 qpair failed and we were unable to recover it. 00:30:39.464 [2024-11-20 15:40:28.357633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.464 [2024-11-20 15:40:28.357662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.464 qpair failed and we were unable to recover it. 00:30:39.464 [2024-11-20 15:40:28.357996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.464 [2024-11-20 15:40:28.358025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.464 qpair failed and we were unable to recover it. 00:30:39.464 [2024-11-20 15:40:28.358385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.464 [2024-11-20 15:40:28.358415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.464 qpair failed and we were unable to recover it. 00:30:39.464 [2024-11-20 15:40:28.358780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.464 [2024-11-20 15:40:28.358808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.464 qpair failed and we were unable to recover it. 00:30:39.464 [2024-11-20 15:40:28.359247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.464 [2024-11-20 15:40:28.359276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.464 qpair failed and we were unable to recover it. 
00:30:39.464 [2024-11-20 15:40:28.359636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.464 [2024-11-20 15:40:28.359670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.465 qpair failed and we were unable to recover it. 00:30:39.465 [2024-11-20 15:40:28.360031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.465 [2024-11-20 15:40:28.360059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.465 qpair failed and we were unable to recover it. 00:30:39.465 [2024-11-20 15:40:28.360419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.465 [2024-11-20 15:40:28.360450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.465 qpair failed and we were unable to recover it. 00:30:39.465 [2024-11-20 15:40:28.360806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.465 [2024-11-20 15:40:28.360834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.465 qpair failed and we were unable to recover it. 00:30:39.465 [2024-11-20 15:40:28.361280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.465 [2024-11-20 15:40:28.361310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.465 qpair failed and we were unable to recover it. 00:30:39.465 [2024-11-20 15:40:28.361752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.465 [2024-11-20 15:40:28.361781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.465 qpair failed and we were unable to recover it. 00:30:39.465 [2024-11-20 15:40:28.362088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.465 [2024-11-20 15:40:28.362116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.465 qpair failed and we were unable to recover it. 00:30:39.465 [2024-11-20 15:40:28.362446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.465 [2024-11-20 15:40:28.362476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.465 qpair failed and we were unable to recover it. 00:30:39.465 [2024-11-20 15:40:28.362813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.465 [2024-11-20 15:40:28.362841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.465 qpair failed and we were unable to recover it. 00:30:39.465 [2024-11-20 15:40:28.363149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.465 [2024-11-20 15:40:28.363186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.465 qpair failed and we were unable to recover it. 
00:30:39.465 [2024-11-20 15:40:28.363534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.465 [2024-11-20 15:40:28.363562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.465 qpair failed and we were unable to recover it. 00:30:39.465 [2024-11-20 15:40:28.363924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.465 [2024-11-20 15:40:28.363952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.465 qpair failed and we were unable to recover it. 00:30:39.465 [2024-11-20 15:40:28.364323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.465 [2024-11-20 15:40:28.364355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.465 qpair failed and we were unable to recover it. 00:30:39.465 [2024-11-20 15:40:28.364733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.465 [2024-11-20 15:40:28.364762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.465 qpair failed and we were unable to recover it. 00:30:39.465 [2024-11-20 15:40:28.365130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.465 [2024-11-20 15:40:28.365169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.465 qpair failed and we were unable to recover it. 00:30:39.465 [2024-11-20 15:40:28.365541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.465 [2024-11-20 15:40:28.365570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.465 qpair failed and we were unable to recover it. 00:30:39.465 [2024-11-20 15:40:28.365936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.465 [2024-11-20 15:40:28.365965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.465 qpair failed and we were unable to recover it. 00:30:39.465 [2024-11-20 15:40:28.366335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.465 [2024-11-20 15:40:28.366364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.465 qpair failed and we were unable to recover it. 00:30:39.465 [2024-11-20 15:40:28.366720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.465 [2024-11-20 15:40:28.366749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.465 qpair failed and we were unable to recover it. 00:30:39.465 [2024-11-20 15:40:28.367102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.465 [2024-11-20 15:40:28.367131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.465 qpair failed and we were unable to recover it. 
[... the same three-line failure pattern (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats without interruption from 15:40:28.367549 through 15:40:28.440400 ...]
00:30:39.745 [2024-11-20 15:40:28.440761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.745 [2024-11-20 15:40:28.440790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.745 qpair failed and we were unable to recover it. 00:30:39.745 [2024-11-20 15:40:28.441135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.745 [2024-11-20 15:40:28.441180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.745 qpair failed and we were unable to recover it. 00:30:39.745 [2024-11-20 15:40:28.441522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.745 [2024-11-20 15:40:28.441551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.745 qpair failed and we were unable to recover it. 00:30:39.745 [2024-11-20 15:40:28.441988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.745 [2024-11-20 15:40:28.442018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.745 qpair failed and we were unable to recover it. 00:30:39.745 [2024-11-20 15:40:28.442469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.745 [2024-11-20 15:40:28.442502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.745 qpair failed and we were unable to recover it. 00:30:39.745 [2024-11-20 15:40:28.442844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.745 [2024-11-20 15:40:28.442872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.745 qpair failed and we were unable to recover it. 00:30:39.745 [2024-11-20 15:40:28.443239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.745 [2024-11-20 15:40:28.443269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.745 qpair failed and we were unable to recover it. 00:30:39.745 [2024-11-20 15:40:28.443503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.745 [2024-11-20 15:40:28.443535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.745 qpair failed and we were unable to recover it. 00:30:39.745 [2024-11-20 15:40:28.443919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.745 [2024-11-20 15:40:28.443948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.745 qpair failed and we were unable to recover it. 00:30:39.745 [2024-11-20 15:40:28.444414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.745 [2024-11-20 15:40:28.444445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.745 qpair failed and we were unable to recover it. 
00:30:39.745 [2024-11-20 15:40:28.444799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.745 [2024-11-20 15:40:28.444828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.745 qpair failed and we were unable to recover it. 00:30:39.745 [2024-11-20 15:40:28.445203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.745 [2024-11-20 15:40:28.445232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.745 qpair failed and we were unable to recover it. 00:30:39.745 [2024-11-20 15:40:28.445607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.745 [2024-11-20 15:40:28.445635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.746 qpair failed and we were unable to recover it. 00:30:39.746 [2024-11-20 15:40:28.445940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.746 [2024-11-20 15:40:28.445969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.746 qpair failed and we were unable to recover it. 00:30:39.746 [2024-11-20 15:40:28.446368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.746 [2024-11-20 15:40:28.446399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.746 qpair failed and we were unable to recover it. 00:30:39.746 [2024-11-20 15:40:28.446751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.746 [2024-11-20 15:40:28.446781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.746 qpair failed and we were unable to recover it. 00:30:39.746 [2024-11-20 15:40:28.447130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.746 [2024-11-20 15:40:28.447165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.746 qpair failed and we were unable to recover it. 00:30:39.746 [2024-11-20 15:40:28.447457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.746 [2024-11-20 15:40:28.447492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.746 qpair failed and we were unable to recover it. 00:30:39.746 [2024-11-20 15:40:28.447848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.746 [2024-11-20 15:40:28.447877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.746 qpair failed and we were unable to recover it. 00:30:39.746 [2024-11-20 15:40:28.448256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.746 [2024-11-20 15:40:28.448286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.746 qpair failed and we were unable to recover it. 
00:30:39.746 [2024-11-20 15:40:28.448653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.746 [2024-11-20 15:40:28.448682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.746 qpair failed and we were unable to recover it. 00:30:39.746 [2024-11-20 15:40:28.449123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.746 [2024-11-20 15:40:28.449154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.746 qpair failed and we were unable to recover it. 00:30:39.746 [2024-11-20 15:40:28.449515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.746 [2024-11-20 15:40:28.449545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.746 qpair failed and we were unable to recover it. 00:30:39.746 [2024-11-20 15:40:28.449780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.746 [2024-11-20 15:40:28.449808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.746 qpair failed and we were unable to recover it. 00:30:39.746 [2024-11-20 15:40:28.450187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.746 [2024-11-20 15:40:28.450217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.746 qpair failed and we were unable to recover it. 00:30:39.746 [2024-11-20 15:40:28.450648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.746 [2024-11-20 15:40:28.450677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.746 qpair failed and we were unable to recover it. 00:30:39.746 [2024-11-20 15:40:28.451061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.746 [2024-11-20 15:40:28.451092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.746 qpair failed and we were unable to recover it. 00:30:39.746 [2024-11-20 15:40:28.451431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.746 [2024-11-20 15:40:28.451462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.746 qpair failed and we were unable to recover it. 00:30:39.746 [2024-11-20 15:40:28.451812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.746 [2024-11-20 15:40:28.451842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.746 qpair failed and we were unable to recover it. 00:30:39.746 [2024-11-20 15:40:28.452197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.746 [2024-11-20 15:40:28.452228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.746 qpair failed and we were unable to recover it. 
00:30:39.746 [2024-11-20 15:40:28.452589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.746 [2024-11-20 15:40:28.452618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.746 qpair failed and we were unable to recover it. 00:30:39.746 [2024-11-20 15:40:28.452986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.746 [2024-11-20 15:40:28.453016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.746 qpair failed and we were unable to recover it. 00:30:39.746 [2024-11-20 15:40:28.453386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.746 [2024-11-20 15:40:28.453417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.746 qpair failed and we were unable to recover it. 00:30:39.746 [2024-11-20 15:40:28.453778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.746 [2024-11-20 15:40:28.453807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.746 qpair failed and we were unable to recover it. 00:30:39.746 [2024-11-20 15:40:28.454199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.746 [2024-11-20 15:40:28.454230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.746 qpair failed and we were unable to recover it. 00:30:39.746 [2024-11-20 15:40:28.454606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.746 [2024-11-20 15:40:28.454635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.746 qpair failed and we were unable to recover it. 00:30:39.746 [2024-11-20 15:40:28.454994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.746 [2024-11-20 15:40:28.455023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.746 qpair failed and we were unable to recover it. 00:30:39.746 [2024-11-20 15:40:28.455271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.746 [2024-11-20 15:40:28.455305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.746 qpair failed and we were unable to recover it. 00:30:39.746 [2024-11-20 15:40:28.455656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.746 [2024-11-20 15:40:28.455686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.746 qpair failed and we were unable to recover it. 00:30:39.746 [2024-11-20 15:40:28.456048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.746 [2024-11-20 15:40:28.456077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.746 qpair failed and we were unable to recover it. 
00:30:39.746 [2024-11-20 15:40:28.456443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.746 [2024-11-20 15:40:28.456474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.746 qpair failed and we were unable to recover it. 00:30:39.746 [2024-11-20 15:40:28.456846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.746 [2024-11-20 15:40:28.456877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.746 qpair failed and we were unable to recover it. 00:30:39.746 [2024-11-20 15:40:28.457248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.746 [2024-11-20 15:40:28.457297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.746 qpair failed and we were unable to recover it. 00:30:39.746 [2024-11-20 15:40:28.457565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.746 [2024-11-20 15:40:28.457594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.746 qpair failed and we were unable to recover it. 00:30:39.746 [2024-11-20 15:40:28.457978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.746 [2024-11-20 15:40:28.458008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.746 qpair failed and we were unable to recover it. 00:30:39.746 [2024-11-20 15:40:28.458382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.747 [2024-11-20 15:40:28.458413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.747 qpair failed and we were unable to recover it. 00:30:39.747 [2024-11-20 15:40:28.458629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.747 [2024-11-20 15:40:28.458658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.747 qpair failed and we were unable to recover it. 00:30:39.747 [2024-11-20 15:40:28.459000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.747 [2024-11-20 15:40:28.459029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.747 qpair failed and we were unable to recover it. 00:30:39.747 [2024-11-20 15:40:28.459380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.747 [2024-11-20 15:40:28.459410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.747 qpair failed and we were unable to recover it. 00:30:39.747 [2024-11-20 15:40:28.459830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.747 [2024-11-20 15:40:28.459858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.747 qpair failed and we were unable to recover it. 
00:30:39.747 [2024-11-20 15:40:28.460109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.747 [2024-11-20 15:40:28.460138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.747 qpair failed and we were unable to recover it. 00:30:39.747 [2024-11-20 15:40:28.460519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.747 [2024-11-20 15:40:28.460550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.747 qpair failed and we were unable to recover it. 00:30:39.747 [2024-11-20 15:40:28.460936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.747 [2024-11-20 15:40:28.460966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.747 qpair failed and we were unable to recover it. 00:30:39.747 [2024-11-20 15:40:28.461333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.747 [2024-11-20 15:40:28.461363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.747 qpair failed and we were unable to recover it. 00:30:39.747 [2024-11-20 15:40:28.461723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.747 [2024-11-20 15:40:28.461751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.747 qpair failed and we were unable to recover it. 00:30:39.747 [2024-11-20 15:40:28.462109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.747 [2024-11-20 15:40:28.462139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.747 qpair failed and we were unable to recover it. 00:30:39.747 [2024-11-20 15:40:28.462524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.747 [2024-11-20 15:40:28.462555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.747 qpair failed and we were unable to recover it. 00:30:39.747 [2024-11-20 15:40:28.462912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.747 [2024-11-20 15:40:28.462949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.747 qpair failed and we were unable to recover it. 00:30:39.747 [2024-11-20 15:40:28.463303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.747 [2024-11-20 15:40:28.463341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.747 qpair failed and we were unable to recover it. 00:30:39.747 [2024-11-20 15:40:28.463582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.747 [2024-11-20 15:40:28.463614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.747 qpair failed and we were unable to recover it. 
00:30:39.747 [2024-11-20 15:40:28.463959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.747 [2024-11-20 15:40:28.463988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.747 qpair failed and we were unable to recover it. 00:30:39.747 [2024-11-20 15:40:28.464384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.747 [2024-11-20 15:40:28.464414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.747 qpair failed and we were unable to recover it. 00:30:39.747 [2024-11-20 15:40:28.464662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.747 [2024-11-20 15:40:28.464692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.747 qpair failed and we were unable to recover it. 00:30:39.747 [2024-11-20 15:40:28.465029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.747 [2024-11-20 15:40:28.465059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.747 qpair failed and we were unable to recover it. 00:30:39.747 [2024-11-20 15:40:28.465466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.747 [2024-11-20 15:40:28.465497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.747 qpair failed and we were unable to recover it. 00:30:39.747 [2024-11-20 15:40:28.465855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.747 [2024-11-20 15:40:28.465883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.747 qpair failed and we were unable to recover it. 00:30:39.747 [2024-11-20 15:40:28.466147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.747 [2024-11-20 15:40:28.466184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.747 qpair failed and we were unable to recover it. 00:30:39.747 [2024-11-20 15:40:28.466591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.747 [2024-11-20 15:40:28.466619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.747 qpair failed and we were unable to recover it. 00:30:39.747 [2024-11-20 15:40:28.466990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.747 [2024-11-20 15:40:28.467020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.747 qpair failed and we were unable to recover it. 00:30:39.747 [2024-11-20 15:40:28.467391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.747 [2024-11-20 15:40:28.467423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.747 qpair failed and we were unable to recover it. 
00:30:39.747 [2024-11-20 15:40:28.467820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.747 [2024-11-20 15:40:28.467849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.747 qpair failed and we were unable to recover it. 00:30:39.747 [2024-11-20 15:40:28.468211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.747 [2024-11-20 15:40:28.468242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.747 qpair failed and we were unable to recover it. 00:30:39.747 [2024-11-20 15:40:28.468517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.747 [2024-11-20 15:40:28.468545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.747 qpair failed and we were unable to recover it. 00:30:39.747 [2024-11-20 15:40:28.468917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.747 [2024-11-20 15:40:28.468946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.747 qpair failed and we were unable to recover it. 00:30:39.747 [2024-11-20 15:40:28.469338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.747 [2024-11-20 15:40:28.469370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.747 qpair failed and we were unable to recover it. 00:30:39.747 [2024-11-20 15:40:28.469623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.747 [2024-11-20 15:40:28.469654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.747 qpair failed and we were unable to recover it. 00:30:39.747 [2024-11-20 15:40:28.470047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.747 [2024-11-20 15:40:28.470077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.747 qpair failed and we were unable to recover it. 00:30:39.747 [2024-11-20 15:40:28.470454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.747 [2024-11-20 15:40:28.470485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.747 qpair failed and we were unable to recover it. 00:30:39.747 [2024-11-20 15:40:28.470939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.747 [2024-11-20 15:40:28.470968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.747 qpair failed and we were unable to recover it. 00:30:39.747 [2024-11-20 15:40:28.471324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.747 [2024-11-20 15:40:28.471355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.747 qpair failed and we were unable to recover it. 
00:30:39.747 [2024-11-20 15:40:28.471722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.747 [2024-11-20 15:40:28.471753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.747 qpair failed and we were unable to recover it. 00:30:39.747 [2024-11-20 15:40:28.472101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.747 [2024-11-20 15:40:28.472131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.747 qpair failed and we were unable to recover it. 00:30:39.747 [2024-11-20 15:40:28.472537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.747 [2024-11-20 15:40:28.472566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.747 qpair failed and we were unable to recover it. 00:30:39.747 [2024-11-20 15:40:28.472979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.748 [2024-11-20 15:40:28.473008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.748 qpair failed and we were unable to recover it. 00:30:39.748 [2024-11-20 15:40:28.473383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.748 [2024-11-20 15:40:28.473414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.748 qpair failed and we were unable to recover it. 00:30:39.748 [2024-11-20 15:40:28.473762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.748 [2024-11-20 15:40:28.473792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.748 qpair failed and we were unable to recover it. 00:30:39.748 [2024-11-20 15:40:28.474180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.748 [2024-11-20 15:40:28.474210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.748 qpair failed and we were unable to recover it. 00:30:39.748 [2024-11-20 15:40:28.474550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.748 [2024-11-20 15:40:28.474579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.748 qpair failed and we were unable to recover it. 00:30:39.748 [2024-11-20 15:40:28.474941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.748 [2024-11-20 15:40:28.474970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.748 qpair failed and we were unable to recover it. 00:30:39.748 [2024-11-20 15:40:28.475348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.748 [2024-11-20 15:40:28.475378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.748 qpair failed and we were unable to recover it. 
00:30:39.748 [2024-11-20 15:40:28.475712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.748 [2024-11-20 15:40:28.475740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.748 qpair failed and we were unable to recover it. 00:30:39.748 [2024-11-20 15:40:28.476114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.748 [2024-11-20 15:40:28.476143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.748 qpair failed and we were unable to recover it. 00:30:39.748 [2024-11-20 15:40:28.476505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.748 [2024-11-20 15:40:28.476535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.748 qpair failed and we were unable to recover it. 00:30:39.748 [2024-11-20 15:40:28.476968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.748 [2024-11-20 15:40:28.476998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.748 qpair failed and we were unable to recover it. 00:30:39.748 [2024-11-20 15:40:28.477333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.748 [2024-11-20 15:40:28.477363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.748 qpair failed and we were unable to recover it. 00:30:39.748 [2024-11-20 15:40:28.477740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.748 [2024-11-20 15:40:28.477770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.748 qpair failed and we were unable to recover it. 00:30:39.748 [2024-11-20 15:40:28.478143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.748 [2024-11-20 15:40:28.478183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.748 qpair failed and we were unable to recover it. 00:30:39.748 [2024-11-20 15:40:28.478518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.748 [2024-11-20 15:40:28.478552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.748 qpair failed and we were unable to recover it. 00:30:39.748 [2024-11-20 15:40:28.478913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.748 [2024-11-20 15:40:28.478942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.748 qpair failed and we were unable to recover it. 00:30:39.748 [2024-11-20 15:40:28.479358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.748 [2024-11-20 15:40:28.479388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.748 qpair failed and we were unable to recover it. 
00:30:39.748 [2024-11-20 15:40:28.479769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.748 [2024-11-20 15:40:28.479798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.748 qpair failed and we were unable to recover it. 00:30:39.748 [2024-11-20 15:40:28.480167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.748 [2024-11-20 15:40:28.480197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.748 qpair failed and we were unable to recover it. 00:30:39.748 [2024-11-20 15:40:28.480555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.748 [2024-11-20 15:40:28.480585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.748 qpair failed and we were unable to recover it. 00:30:39.748 [2024-11-20 15:40:28.480931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.748 [2024-11-20 15:40:28.480960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.748 qpair failed and we were unable to recover it. 00:30:39.748 [2024-11-20 15:40:28.481333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.748 [2024-11-20 15:40:28.481364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.748 qpair failed and we were unable to recover it. 00:30:39.748 [2024-11-20 15:40:28.481716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.748 [2024-11-20 15:40:28.481745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.748 qpair failed and we were unable to recover it. 00:30:39.748 [2024-11-20 15:40:28.482118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.748 [2024-11-20 15:40:28.482147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.748 qpair failed and we were unable to recover it. 00:30:39.748 [2024-11-20 15:40:28.482511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.748 [2024-11-20 15:40:28.482540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.748 qpair failed and we were unable to recover it. 00:30:39.748 [2024-11-20 15:40:28.482902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.748 [2024-11-20 15:40:28.482933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.748 qpair failed and we were unable to recover it. 00:30:39.748 [2024-11-20 15:40:28.483280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.748 [2024-11-20 15:40:28.483310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.748 qpair failed and we were unable to recover it. 
00:30:39.748 [2024-11-20 15:40:28.483678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.748 [2024-11-20 15:40:28.483707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.748 qpair failed and we were unable to recover it. 00:30:39.748 [2024-11-20 15:40:28.483943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.748 [2024-11-20 15:40:28.483971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.748 qpair failed and we were unable to recover it. 00:30:39.748 [2024-11-20 15:40:28.484340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.748 [2024-11-20 15:40:28.484370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.748 qpair failed and we were unable to recover it. 00:30:39.748 [2024-11-20 15:40:28.484686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.748 [2024-11-20 15:40:28.484716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.748 qpair failed and we were unable to recover it. 00:30:39.748 [2024-11-20 15:40:28.485072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.748 [2024-11-20 15:40:28.485100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.748 qpair failed and we were unable to recover it. 00:30:39.748 [2024-11-20 15:40:28.485492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.748 [2024-11-20 15:40:28.485522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.748 qpair failed and we were unable to recover it. 00:30:39.748 [2024-11-20 15:40:28.485865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.748 [2024-11-20 15:40:28.485896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.748 qpair failed and we were unable to recover it. 00:30:39.748 [2024-11-20 15:40:28.486137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.748 [2024-11-20 15:40:28.486180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.748 qpair failed and we were unable to recover it. 00:30:39.748 [2024-11-20 15:40:28.486550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.748 [2024-11-20 15:40:28.486578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.748 qpair failed and we were unable to recover it. 00:30:39.748 [2024-11-20 15:40:28.486939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.748 [2024-11-20 15:40:28.486969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.748 qpair failed and we were unable to recover it. 
00:30:39.748 [2024-11-20 15:40:28.487326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.748 [2024-11-20 15:40:28.487357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.748 qpair failed and we were unable to recover it. 00:30:39.748 [2024-11-20 15:40:28.487721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.749 [2024-11-20 15:40:28.487751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.749 qpair failed and we were unable to recover it. 00:30:39.749 [2024-11-20 15:40:28.488155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.749 [2024-11-20 15:40:28.488193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.749 qpair failed and we were unable to recover it. 00:30:39.749 [2024-11-20 15:40:28.488549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.749 [2024-11-20 15:40:28.488579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.749 qpair failed and we were unable to recover it. 00:30:39.749 [2024-11-20 15:40:28.488947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.749 [2024-11-20 15:40:28.488976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.749 qpair failed and we were unable to recover it. 00:30:39.749 [2024-11-20 15:40:28.489340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.749 [2024-11-20 15:40:28.489370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.749 qpair failed and we were unable to recover it. 00:30:39.749 [2024-11-20 15:40:28.489629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.749 [2024-11-20 15:40:28.489658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.749 qpair failed and we were unable to recover it. 00:30:39.749 [2024-11-20 15:40:28.490054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.749 [2024-11-20 15:40:28.490084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.749 qpair failed and we were unable to recover it. 00:30:39.749 [2024-11-20 15:40:28.490321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.749 [2024-11-20 15:40:28.490353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.749 qpair failed and we were unable to recover it. 00:30:39.749 [2024-11-20 15:40:28.490703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.749 [2024-11-20 15:40:28.490732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.749 qpair failed and we were unable to recover it. 
00:30:39.749 [2024-11-20 15:40:28.491090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.749 [2024-11-20 15:40:28.491119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.749 qpair failed and we were unable to recover it. 00:30:39.749 [2024-11-20 15:40:28.491480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.749 [2024-11-20 15:40:28.491509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.749 qpair failed and we were unable to recover it. 00:30:39.749 [2024-11-20 15:40:28.491866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.749 [2024-11-20 15:40:28.491896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.749 qpair failed and we were unable to recover it. 00:30:39.749 [2024-11-20 15:40:28.492260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.749 [2024-11-20 15:40:28.492291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.749 qpair failed and we were unable to recover it. 00:30:39.749 [2024-11-20 15:40:28.492655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.749 [2024-11-20 15:40:28.492684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.749 qpair failed and we were unable to recover it. 00:30:39.749 [2024-11-20 15:40:28.493018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.749 [2024-11-20 15:40:28.493046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.749 qpair failed and we were unable to recover it. 00:30:39.749 [2024-11-20 15:40:28.493403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.749 [2024-11-20 15:40:28.493434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.749 qpair failed and we were unable to recover it. 00:30:39.749 [2024-11-20 15:40:28.493796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.749 [2024-11-20 15:40:28.493830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.749 qpair failed and we were unable to recover it. 00:30:39.749 [2024-11-20 15:40:28.494203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.749 [2024-11-20 15:40:28.494233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.749 qpair failed and we were unable to recover it. 00:30:39.749 [2024-11-20 15:40:28.494613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.749 [2024-11-20 15:40:28.494643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.749 qpair failed and we were unable to recover it. 
00:30:39.749 [2024-11-20 15:40:28.495017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:39.749 [2024-11-20 15:40:28.495047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:39.749 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1054:posix_sock_create: connect() failed, errno = 111 -> nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats 208 more times, timestamps 15:40:28.495 through 15:40:28.575; only the final occurrence is kept below ...]
00:30:39.755 [2024-11-20 15:40:28.574873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:39.755 [2024-11-20 15:40:28.574901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:39.755 qpair failed and we were unable to recover it.
00:30:39.755 [2024-11-20 15:40:28.575318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.755 [2024-11-20 15:40:28.575347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.755 qpair failed and we were unable to recover it. 00:30:39.755 [2024-11-20 15:40:28.575598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.755 [2024-11-20 15:40:28.575627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.755 qpair failed and we were unable to recover it. 00:30:39.755 [2024-11-20 15:40:28.575982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.755 [2024-11-20 15:40:28.576012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.755 qpair failed and we were unable to recover it. 00:30:39.755 [2024-11-20 15:40:28.576388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.755 [2024-11-20 15:40:28.576419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.755 qpair failed and we were unable to recover it. 00:30:39.755 [2024-11-20 15:40:28.576785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.755 [2024-11-20 15:40:28.576814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.755 qpair failed and we were unable to recover it. 00:30:39.755 [2024-11-20 15:40:28.577190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.755 [2024-11-20 15:40:28.577220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.755 qpair failed and we were unable to recover it. 00:30:39.755 [2024-11-20 15:40:28.577569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.755 [2024-11-20 15:40:28.577598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.755 qpair failed and we were unable to recover it. 00:30:39.755 [2024-11-20 15:40:28.577964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.755 [2024-11-20 15:40:28.577993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.755 qpair failed and we were unable to recover it. 00:30:39.755 [2024-11-20 15:40:28.578264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.755 [2024-11-20 15:40:28.578293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.755 qpair failed and we were unable to recover it. 00:30:39.755 [2024-11-20 15:40:28.578663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.755 [2024-11-20 15:40:28.578692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.755 qpair failed and we were unable to recover it. 
00:30:39.755 [2024-11-20 15:40:28.579061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.755 [2024-11-20 15:40:28.579089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.755 qpair failed and we were unable to recover it. 00:30:39.755 [2024-11-20 15:40:28.579439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.755 [2024-11-20 15:40:28.579469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.755 qpair failed and we were unable to recover it. 00:30:39.755 [2024-11-20 15:40:28.579828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.755 [2024-11-20 15:40:28.579858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.755 qpair failed and we were unable to recover it. 00:30:39.755 [2024-11-20 15:40:28.580226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.755 [2024-11-20 15:40:28.580257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.755 qpair failed and we were unable to recover it. 00:30:39.755 [2024-11-20 15:40:28.580621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.755 [2024-11-20 15:40:28.580651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.755 qpair failed and we were unable to recover it. 00:30:39.755 [2024-11-20 15:40:28.581007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.755 [2024-11-20 15:40:28.581035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.755 qpair failed and we were unable to recover it. 00:30:39.755 [2024-11-20 15:40:28.581387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.755 [2024-11-20 15:40:28.581417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.755 qpair failed and we were unable to recover it. 00:30:39.755 [2024-11-20 15:40:28.581787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.755 [2024-11-20 15:40:28.581816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.755 qpair failed and we were unable to recover it. 00:30:39.755 [2024-11-20 15:40:28.582188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.755 [2024-11-20 15:40:28.582219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.755 qpair failed and we were unable to recover it. 00:30:39.755 [2024-11-20 15:40:28.582562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.755 [2024-11-20 15:40:28.582592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.755 qpair failed and we were unable to recover it. 
00:30:39.755 [2024-11-20 15:40:28.582962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.755 [2024-11-20 15:40:28.582990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.755 qpair failed and we were unable to recover it. 00:30:39.755 [2024-11-20 15:40:28.583329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.755 [2024-11-20 15:40:28.583359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.755 qpair failed and we were unable to recover it. 00:30:39.755 [2024-11-20 15:40:28.583723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.755 [2024-11-20 15:40:28.583751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.755 qpair failed and we were unable to recover it. 00:30:39.755 [2024-11-20 15:40:28.584123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.755 [2024-11-20 15:40:28.584151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.755 qpair failed and we were unable to recover it. 00:30:39.755 [2024-11-20 15:40:28.584384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.755 [2024-11-20 15:40:28.584412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.755 qpair failed and we were unable to recover it. 00:30:39.755 [2024-11-20 15:40:28.584774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.755 [2024-11-20 15:40:28.584804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.755 qpair failed and we were unable to recover it. 00:30:39.755 [2024-11-20 15:40:28.585176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.755 [2024-11-20 15:40:28.585207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.755 qpair failed and we were unable to recover it. 00:30:39.755 [2024-11-20 15:40:28.585560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.755 [2024-11-20 15:40:28.585599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.755 qpair failed and we were unable to recover it. 00:30:39.755 [2024-11-20 15:40:28.585957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.756 [2024-11-20 15:40:28.585985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.756 qpair failed and we were unable to recover it. 00:30:39.756 [2024-11-20 15:40:28.586352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.756 [2024-11-20 15:40:28.586383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.756 qpair failed and we were unable to recover it. 
00:30:39.756 [2024-11-20 15:40:28.586736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.756 [2024-11-20 15:40:28.586765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.756 qpair failed and we were unable to recover it. 00:30:39.756 [2024-11-20 15:40:28.587146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.756 [2024-11-20 15:40:28.587185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.756 qpair failed and we were unable to recover it. 00:30:39.756 [2024-11-20 15:40:28.587512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.756 [2024-11-20 15:40:28.587542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.756 qpair failed and we were unable to recover it. 00:30:39.756 [2024-11-20 15:40:28.587901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.756 [2024-11-20 15:40:28.587930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.756 qpair failed and we were unable to recover it. 00:30:39.756 [2024-11-20 15:40:28.588284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.756 [2024-11-20 15:40:28.588315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.756 qpair failed and we were unable to recover it. 00:30:39.756 [2024-11-20 15:40:28.588677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.756 [2024-11-20 15:40:28.588705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.756 qpair failed and we were unable to recover it. 00:30:39.756 [2024-11-20 15:40:28.589065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.756 [2024-11-20 15:40:28.589095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.756 qpair failed and we were unable to recover it. 00:30:39.756 [2024-11-20 15:40:28.589538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.756 [2024-11-20 15:40:28.589568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.756 qpair failed and we were unable to recover it. 00:30:39.756 [2024-11-20 15:40:28.589932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.756 [2024-11-20 15:40:28.589961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.756 qpair failed and we were unable to recover it. 00:30:39.756 [2024-11-20 15:40:28.590328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.756 [2024-11-20 15:40:28.590358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.756 qpair failed and we were unable to recover it. 
00:30:39.756 [2024-11-20 15:40:28.590727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.756 [2024-11-20 15:40:28.590755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.756 qpair failed and we were unable to recover it. 00:30:39.756 [2024-11-20 15:40:28.591115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.756 [2024-11-20 15:40:28.591145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.756 qpair failed and we were unable to recover it. 00:30:39.756 [2024-11-20 15:40:28.591509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.756 [2024-11-20 15:40:28.591539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.756 qpair failed and we were unable to recover it. 00:30:39.756 [2024-11-20 15:40:28.591905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.756 [2024-11-20 15:40:28.591933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.756 qpair failed and we were unable to recover it. 00:30:39.756 [2024-11-20 15:40:28.592299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.756 [2024-11-20 15:40:28.592328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.756 qpair failed and we were unable to recover it. 00:30:39.756 [2024-11-20 15:40:28.592674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.756 [2024-11-20 15:40:28.592702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.756 qpair failed and we were unable to recover it. 00:30:39.756 [2024-11-20 15:40:28.593082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.756 [2024-11-20 15:40:28.593111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.756 qpair failed and we were unable to recover it. 00:30:39.756 [2024-11-20 15:40:28.593480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.756 [2024-11-20 15:40:28.593509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.756 qpair failed and we were unable to recover it. 00:30:39.756 [2024-11-20 15:40:28.593754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.756 [2024-11-20 15:40:28.593786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.756 qpair failed and we were unable to recover it. 00:30:39.756 [2024-11-20 15:40:28.594080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.756 [2024-11-20 15:40:28.594109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.756 qpair failed and we were unable to recover it. 
00:30:39.756 [2024-11-20 15:40:28.594497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.756 [2024-11-20 15:40:28.594528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.756 qpair failed and we were unable to recover it. 00:30:39.756 [2024-11-20 15:40:28.594886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.756 [2024-11-20 15:40:28.594915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.756 qpair failed and we were unable to recover it. 00:30:39.756 [2024-11-20 15:40:28.595284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.756 [2024-11-20 15:40:28.595316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.756 qpair failed and we were unable to recover it. 00:30:39.756 [2024-11-20 15:40:28.595688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.756 [2024-11-20 15:40:28.595716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.756 qpair failed and we were unable to recover it. 00:30:39.756 [2024-11-20 15:40:28.596074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.756 [2024-11-20 15:40:28.596105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.756 qpair failed and we were unable to recover it. 00:30:39.756 [2024-11-20 15:40:28.596500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.756 [2024-11-20 15:40:28.596531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.756 qpair failed and we were unable to recover it. 00:30:39.756 [2024-11-20 15:40:28.596904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.756 [2024-11-20 15:40:28.596932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.756 qpair failed and we were unable to recover it. 00:30:39.756 [2024-11-20 15:40:28.597288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.756 [2024-11-20 15:40:28.597319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.756 qpair failed and we were unable to recover it. 00:30:39.756 [2024-11-20 15:40:28.597691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.756 [2024-11-20 15:40:28.597719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.756 qpair failed and we were unable to recover it. 00:30:39.756 [2024-11-20 15:40:28.598085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.756 [2024-11-20 15:40:28.598114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.756 qpair failed and we were unable to recover it. 
00:30:39.756 [2024-11-20 15:40:28.598496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.756 [2024-11-20 15:40:28.598527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.756 qpair failed and we were unable to recover it. 00:30:39.756 [2024-11-20 15:40:28.598762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.756 [2024-11-20 15:40:28.598790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.756 qpair failed and we were unable to recover it. 00:30:39.756 [2024-11-20 15:40:28.599122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.756 [2024-11-20 15:40:28.599150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.756 qpair failed and we were unable to recover it. 00:30:39.756 [2024-11-20 15:40:28.599508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.756 [2024-11-20 15:40:28.599539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.756 qpair failed and we were unable to recover it. 00:30:39.756 [2024-11-20 15:40:28.599877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.756 [2024-11-20 15:40:28.599906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.756 qpair failed and we were unable to recover it. 00:30:39.756 [2024-11-20 15:40:28.600260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.756 [2024-11-20 15:40:28.600291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.756 qpair failed and we were unable to recover it. 00:30:39.756 [2024-11-20 15:40:28.600622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.757 [2024-11-20 15:40:28.600652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.757 qpair failed and we were unable to recover it. 00:30:39.757 [2024-11-20 15:40:28.601006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.757 [2024-11-20 15:40:28.601040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.757 qpair failed and we were unable to recover it. 00:30:39.757 [2024-11-20 15:40:28.601403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.757 [2024-11-20 15:40:28.601433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.757 qpair failed and we were unable to recover it. 00:30:39.757 [2024-11-20 15:40:28.601802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.757 [2024-11-20 15:40:28.601830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.757 qpair failed and we were unable to recover it. 
00:30:39.757 [2024-11-20 15:40:28.602218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.757 [2024-11-20 15:40:28.602248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.757 qpair failed and we were unable to recover it. 00:30:39.757 [2024-11-20 15:40:28.602626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.757 [2024-11-20 15:40:28.602654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.757 qpair failed and we were unable to recover it. 00:30:39.757 [2024-11-20 15:40:28.603016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.757 [2024-11-20 15:40:28.603044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.757 qpair failed and we were unable to recover it. 00:30:39.757 [2024-11-20 15:40:28.603386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.757 [2024-11-20 15:40:28.603415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.757 qpair failed and we were unable to recover it. 00:30:39.757 [2024-11-20 15:40:28.603781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.757 [2024-11-20 15:40:28.603810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.757 qpair failed and we were unable to recover it. 00:30:39.757 [2024-11-20 15:40:28.604190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.757 [2024-11-20 15:40:28.604220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.757 qpair failed and we were unable to recover it. 00:30:39.757 [2024-11-20 15:40:28.604564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.757 [2024-11-20 15:40:28.604594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.757 qpair failed and we were unable to recover it. 00:30:39.757 [2024-11-20 15:40:28.604970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.757 [2024-11-20 15:40:28.605000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.757 qpair failed and we were unable to recover it. 00:30:39.757 [2024-11-20 15:40:28.605401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.757 [2024-11-20 15:40:28.605432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.757 qpair failed and we were unable to recover it. 00:30:39.757 [2024-11-20 15:40:28.605792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.757 [2024-11-20 15:40:28.605820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.757 qpair failed and we were unable to recover it. 
00:30:39.757 [2024-11-20 15:40:28.606182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.757 [2024-11-20 15:40:28.606210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.757 qpair failed and we were unable to recover it. 00:30:39.757 [2024-11-20 15:40:28.606519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.757 [2024-11-20 15:40:28.606548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.757 qpair failed and we were unable to recover it. 00:30:39.757 [2024-11-20 15:40:28.606909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.757 [2024-11-20 15:40:28.606936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.757 qpair failed and we were unable to recover it. 00:30:39.757 [2024-11-20 15:40:28.607201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.757 [2024-11-20 15:40:28.607230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.757 qpair failed and we were unable to recover it. 00:30:39.757 [2024-11-20 15:40:28.607626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.757 [2024-11-20 15:40:28.607655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.757 qpair failed and we were unable to recover it. 00:30:39.757 [2024-11-20 15:40:28.608014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.757 [2024-11-20 15:40:28.608042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.757 qpair failed and we were unable to recover it. 00:30:39.757 [2024-11-20 15:40:28.608411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.757 [2024-11-20 15:40:28.608441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.757 qpair failed and we were unable to recover it. 00:30:39.757 [2024-11-20 15:40:28.608805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.757 [2024-11-20 15:40:28.608833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.757 qpair failed and we were unable to recover it. 00:30:39.757 [2024-11-20 15:40:28.609195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.757 [2024-11-20 15:40:28.609226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.757 qpair failed and we were unable to recover it. 00:30:39.757 [2024-11-20 15:40:28.609583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.757 [2024-11-20 15:40:28.609612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.757 qpair failed and we were unable to recover it. 
00:30:39.757 [2024-11-20 15:40:28.610036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.757 [2024-11-20 15:40:28.610064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.757 qpair failed and we were unable to recover it. 00:30:39.757 [2024-11-20 15:40:28.610404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.757 [2024-11-20 15:40:28.610434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.757 qpair failed and we were unable to recover it. 00:30:39.757 [2024-11-20 15:40:28.610779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.757 [2024-11-20 15:40:28.610807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.757 qpair failed and we were unable to recover it. 00:30:39.757 [2024-11-20 15:40:28.611173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.757 [2024-11-20 15:40:28.611202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.757 qpair failed and we were unable to recover it. 00:30:39.757 [2024-11-20 15:40:28.611551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.757 [2024-11-20 15:40:28.611581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.757 qpair failed and we were unable to recover it. 00:30:39.757 [2024-11-20 15:40:28.611941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.757 [2024-11-20 15:40:28.611971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.757 qpair failed and we were unable to recover it. 00:30:39.757 [2024-11-20 15:40:28.612333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.757 [2024-11-20 15:40:28.612363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.757 qpair failed and we were unable to recover it. 00:30:39.757 [2024-11-20 15:40:28.612721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.757 [2024-11-20 15:40:28.612749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.757 qpair failed and we were unable to recover it. 00:30:39.757 [2024-11-20 15:40:28.613110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.757 [2024-11-20 15:40:28.613138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.757 qpair failed and we were unable to recover it. 00:30:39.757 [2024-11-20 15:40:28.613516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.757 [2024-11-20 15:40:28.613546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.757 qpair failed and we were unable to recover it. 
00:30:39.757 [2024-11-20 15:40:28.613910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.757 [2024-11-20 15:40:28.613938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.757 qpair failed and we were unable to recover it. 00:30:39.757 [2024-11-20 15:40:28.614303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.757 [2024-11-20 15:40:28.614334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.757 qpair failed and we were unable to recover it. 00:30:39.757 [2024-11-20 15:40:28.614753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.757 [2024-11-20 15:40:28.614782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.757 qpair failed and we were unable to recover it. 00:30:39.757 [2024-11-20 15:40:28.615138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.758 [2024-11-20 15:40:28.615174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.758 qpair failed and we were unable to recover it. 00:30:39.758 [2024-11-20 15:40:28.615514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.758 [2024-11-20 15:40:28.615543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.758 qpair failed and we were unable to recover it. 00:30:39.758 [2024-11-20 15:40:28.615900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.758 [2024-11-20 15:40:28.615928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.758 qpair failed and we were unable to recover it. 00:30:39.758 [2024-11-20 15:40:28.616293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.758 [2024-11-20 15:40:28.616324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.758 qpair failed and we were unable to recover it. 00:30:39.758 [2024-11-20 15:40:28.616697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.758 [2024-11-20 15:40:28.616731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.758 qpair failed and we were unable to recover it. 00:30:39.758 [2024-11-20 15:40:28.617070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.758 [2024-11-20 15:40:28.617098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.758 qpair failed and we were unable to recover it. 00:30:39.758 [2024-11-20 15:40:28.617460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.758 [2024-11-20 15:40:28.617489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.758 qpair failed and we were unable to recover it. 
00:30:39.758 [2024-11-20 15:40:28.617837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.758 [2024-11-20 15:40:28.617865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.758 qpair failed and we were unable to recover it. 00:30:39.758 [2024-11-20 15:40:28.618242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.758 [2024-11-20 15:40:28.618272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.758 qpair failed and we were unable to recover it. 00:30:39.758 [2024-11-20 15:40:28.618649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.758 [2024-11-20 15:40:28.618678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.758 qpair failed and we were unable to recover it. 00:30:39.758 [2024-11-20 15:40:28.619029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.758 [2024-11-20 15:40:28.619060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.758 qpair failed and we were unable to recover it. 00:30:39.758 [2024-11-20 15:40:28.619406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.758 [2024-11-20 15:40:28.619435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.758 qpair failed and we were unable to recover it. 00:30:39.758 [2024-11-20 15:40:28.619684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.758 [2024-11-20 15:40:28.619715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.758 qpair failed and we were unable to recover it. 00:30:39.758 [2024-11-20 15:40:28.620043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.758 [2024-11-20 15:40:28.620073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.758 qpair failed and we were unable to recover it. 00:30:39.758 [2024-11-20 15:40:28.620479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.758 [2024-11-20 15:40:28.620509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.758 qpair failed and we were unable to recover it. 00:30:39.758 [2024-11-20 15:40:28.620871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.758 [2024-11-20 15:40:28.620900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.758 qpair failed and we were unable to recover it. 00:30:39.758 [2024-11-20 15:40:28.621169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.758 [2024-11-20 15:40:28.621205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.758 qpair failed and we were unable to recover it. 
00:30:39.758 [2024-11-20 15:40:28.621532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.758 [2024-11-20 15:40:28.621560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.758 qpair failed and we were unable to recover it. 00:30:39.758 [2024-11-20 15:40:28.621939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.758 [2024-11-20 15:40:28.621969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.758 qpair failed and we were unable to recover it. 00:30:39.758 [2024-11-20 15:40:28.622320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.758 [2024-11-20 15:40:28.622351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.758 qpair failed and we were unable to recover it. 00:30:39.758 [2024-11-20 15:40:28.622683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.758 [2024-11-20 15:40:28.622711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.758 qpair failed and we were unable to recover it. 00:30:39.758 [2024-11-20 15:40:28.623077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.758 [2024-11-20 15:40:28.623106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.758 qpair failed and we were unable to recover it. 00:30:39.758 [2024-11-20 15:40:28.623475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.758 [2024-11-20 15:40:28.623505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.758 qpair failed and we were unable to recover it. 00:30:39.758 [2024-11-20 15:40:28.623869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.758 [2024-11-20 15:40:28.623900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.758 qpair failed and we were unable to recover it. 00:30:39.758 [2024-11-20 15:40:28.624265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.758 [2024-11-20 15:40:28.624294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.758 qpair failed and we were unable to recover it. 00:30:39.758 [2024-11-20 15:40:28.624655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.758 [2024-11-20 15:40:28.624684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.758 qpair failed and we were unable to recover it. 00:30:39.758 [2024-11-20 15:40:28.625034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.758 [2024-11-20 15:40:28.625063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.758 qpair failed and we were unable to recover it. 
00:30:39.758 [2024-11-20 15:40:28.625420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.758 [2024-11-20 15:40:28.625450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.758 qpair failed and we were unable to recover it. 00:30:39.758 [2024-11-20 15:40:28.625805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.758 [2024-11-20 15:40:28.625833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.758 qpair failed and we were unable to recover it. 00:30:39.758 [2024-11-20 15:40:28.626097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.758 [2024-11-20 15:40:28.626126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.758 qpair failed and we were unable to recover it. 00:30:39.758 [2024-11-20 15:40:28.626487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.758 [2024-11-20 15:40:28.626517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.758 qpair failed and we were unable to recover it. 00:30:39.758 [2024-11-20 15:40:28.626877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.758 [2024-11-20 15:40:28.626911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.758 qpair failed and we were unable to recover it. 00:30:39.758 [2024-11-20 15:40:28.627275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.758 [2024-11-20 15:40:28.627305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.758 qpair failed and we were unable to recover it. 00:30:39.758 [2024-11-20 15:40:28.627659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.758 [2024-11-20 15:40:28.627687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.758 qpair failed and we were unable to recover it. 00:30:39.758 [2024-11-20 15:40:28.628056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.758 [2024-11-20 15:40:28.628085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.758 qpair failed and we were unable to recover it. 00:30:39.758 [2024-11-20 15:40:28.628427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.758 [2024-11-20 15:40:28.628457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.758 qpair failed and we were unable to recover it. 00:30:39.758 [2024-11-20 15:40:28.628679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.758 [2024-11-20 15:40:28.628710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.758 qpair failed and we were unable to recover it. 
00:30:39.758 [... six further reconnect attempts (15:40:28.629092 through 15:40:28.631017) fail with the same errno = 111 triplet ...]
00:30:39.758 [2024-11-20 15:40:28.631470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:39.759 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 792951 Killed "${NVMF_APP[@]}" "$@"
00:30:39.759 [2024-11-20 15:40:28.631502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:39.759 qpair failed and we were unable to recover it.
00:30:39.759 [... two more attempts (15:40:28.631873, 15:40:28.632237) fail the same way ...]
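Editor's note: the "Killed" line above is bash's job-status notice that PID 792951, the target application started from "${NVMF_APP[@]}", received SIGKILL at line 36 of target_disconnect.sh; killing the target is the deliberate disconnect step of this test, and it is why every connect() attempt around it is refused. A hedged sketch of that step (the PID variable name is an assumption, not the suite's code):

```bash
# Force-kill the target process and reap it; bash then prints the
# "Killed ..." notice for the dead job, exactly as seen in this log.
kill -9 "$nvmfpid"
wait "$nvmfpid" 2>/dev/null   # exit status 137 = 128 + SIGKILL(9)
```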
00:30:39.759 15:40:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
[2024-11-20 15:40:28.632635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:39.759 [2024-11-20 15:40:28.632664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:39.759 qpair failed and we were unable to recover it.
00:30:39.759 15:40:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
[2024-11-20 15:40:28.633030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:39.759 [2024-11-20 15:40:28.633060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:39.759 qpair failed and we were unable to recover it.
00:30:39.759 15:40:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
[2024-11-20 15:40:28.633303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:39.759 [2024-11-20 15:40:28.633336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:39.759 qpair failed and we were unable to recover it.
00:30:39.759 15:40:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
[2024-11-20 15:40:28.633618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:39.759 [2024-11-20 15:40:28.633648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:39.759 qpair failed and we were unable to recover it.
00:30:39.759 15:40:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[2024-11-20 15:40:28.634072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:39.759 [2024-11-20 15:40:28.634101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:39.759 qpair failed and we were unable to recover it.
00:30:39.759 [2024-11-20 15:40:28.634518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:39.759 [2024-11-20 15:40:28.634548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:39.759 qpair failed and we were unable to recover it.
00:30:39.759 [2024-11-20 15:40:28.634890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:39.759 [2024-11-20 15:40:28.634919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:39.759 qpair failed and we were unable to recover it.
00:30:39.759 [2024-11-20 15:40:28.635289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:39.759 [2024-11-20 15:40:28.635319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:39.759 qpair failed and we were unable to recover it.
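Note: the trace above restarts the target with nvmfappstart -m 0xF0. The hex argument is a CPU coremask: bit i set means core i is used, so 0xF0 selects cores 4-7. A small standalone C sketch of that decoding (not SPDK's actual parser):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Parse the "-m" style hex coremask; strtoul accepts the 0x prefix. */
    unsigned long mask = strtoul("0xF0", NULL, 16);

    printf("coremask 0x%lX ->", mask);
    for (unsigned core = 0; core < 8 * sizeof(mask); core++)
        if (mask & (1UL << core))
            printf(" %u", core);
    printf("\n");   /* prints: coremask 0xF0 -> 4 5 6 7 */
    return 0;
}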
00:30:39.759 [2024-11-20 15:40:28.635687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.759 [2024-11-20 15:40:28.635717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.759 qpair failed and we were unable to recover it. 00:30:39.759 [2024-11-20 15:40:28.635995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.759 [2024-11-20 15:40:28.636030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.759 qpair failed and we were unable to recover it. 00:30:39.759 [2024-11-20 15:40:28.636474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.759 [2024-11-20 15:40:28.636504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.759 qpair failed and we were unable to recover it. 00:30:39.759 [2024-11-20 15:40:28.636863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.759 [2024-11-20 15:40:28.636894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.759 qpair failed and we were unable to recover it. 00:30:39.759 [2024-11-20 15:40:28.637235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.759 [2024-11-20 15:40:28.637265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.759 qpair failed and we were unable to recover it. 00:30:39.759 [2024-11-20 15:40:28.637414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.759 [2024-11-20 15:40:28.637442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.759 qpair failed and we were unable to recover it. 00:30:39.759 [2024-11-20 15:40:28.637754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.759 [2024-11-20 15:40:28.637783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.759 qpair failed and we were unable to recover it. 00:30:39.759 [2024-11-20 15:40:28.638057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.759 [2024-11-20 15:40:28.638085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.759 qpair failed and we were unable to recover it. 00:30:39.759 [2024-11-20 15:40:28.638309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.759 [2024-11-20 15:40:28.638339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.759 qpair failed and we were unable to recover it. 00:30:39.759 [2024-11-20 15:40:28.638723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.759 [2024-11-20 15:40:28.638753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.759 qpair failed and we were unable to recover it. 
00:30:39.759 [2024-11-20 15:40:28.639109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.759 [2024-11-20 15:40:28.639139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.759 qpair failed and we were unable to recover it. 00:30:39.759 [2024-11-20 15:40:28.639403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.759 [2024-11-20 15:40:28.639436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.759 qpair failed and we were unable to recover it. 00:30:39.759 [2024-11-20 15:40:28.639706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.759 [2024-11-20 15:40:28.639735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.759 qpair failed and we were unable to recover it. 00:30:39.759 [2024-11-20 15:40:28.640014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.759 [2024-11-20 15:40:28.640042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.759 qpair failed and we were unable to recover it. 00:30:39.759 [2024-11-20 15:40:28.640471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.759 [2024-11-20 15:40:28.640501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.759 qpair failed and we were unable to recover it. 00:30:39.759 [2024-11-20 15:40:28.640845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.759 [2024-11-20 15:40:28.640874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.759 qpair failed and we were unable to recover it. 00:30:39.759 [2024-11-20 15:40:28.641150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.759 [2024-11-20 15:40:28.641200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.759 qpair failed and we were unable to recover it. 00:30:39.759 [2024-11-20 15:40:28.641586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.759 [2024-11-20 15:40:28.641615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.759 qpair failed and we were unable to recover it. 00:30:39.759 [2024-11-20 15:40:28.641985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.759 [2024-11-20 15:40:28.642014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.759 qpair failed and we were unable to recover it. 
00:30:39.759 15:40:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=793835
[2024-11-20 15:40:28.642360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:39.760 [2024-11-20 15:40:28.642392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:39.760 qpair failed and we were unable to recover it.
00:30:39.760 15:40:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 793835
[2024-11-20 15:40:28.642754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:39.760 [2024-11-20 15:40:28.642784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:39.760 qpair failed and we were unable to recover it.
00:30:39.760 15:40:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:30:39.760 15:40:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 793835 ']'
00:30:39.760 15:40:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
[2024-11-20 15:40:28.643150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:39.760 [2024-11-20 15:40:28.643188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:39.760 qpair failed and we were unable to recover it.
00:30:39.760 15:40:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
[2024-11-20 15:40:28.643569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:39.760 [2024-11-20 15:40:28.643599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:39.760 qpair failed and we were unable to recover it.
00:30:39.760 15:40:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
15:40:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
[2024-11-20 15:40:28.643967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:39.760 [2024-11-20 15:40:28.643998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:39.760 qpair failed and we were unable to recover it.
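Note: waitforlisten polls until the relaunched nvmf_tgt (pid 793835) is up and accepting RPCs on the UNIX domain socket /var/tmp/spdk.sock, with max_retries=100 bounding the wait. A rough C equivalent of that polling loop (illustrative; the real helper is shell in test/common and also checks that the pid is still alive):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_un sa = { .sun_family = AF_UNIX };
    strncpy(sa.sun_path, "/var/tmp/spdk.sock", sizeof(sa.sun_path) - 1);

    for (int attempt = 0; attempt < 100; attempt++) {   /* max_retries=100 */
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0) {
            printf("target is up after %d attempt(s)\n", attempt + 1);
            close(fd);
            return 0;
        }
        close(fd);               /* not listening yet: back off and retry */
        usleep(100 * 1000);      /* 100 ms between probes (illustrative) */
    }
    fprintf(stderr, "timed out waiting for /var/tmp/spdk.sock\n");
    return 1;
}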
00:30:39.760 15:40:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[2024-11-20 15:40:28.644344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:39.760 [2024-11-20 15:40:28.644378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:39.760 qpair failed and we were unable to recover it.
00:30:39.760 [2024-11-20 15:40:28.644636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:39.760 [2024-11-20 15:40:28.644671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:39.760 qpair failed and we were unable to recover it.
00:30:39.760 [2024-11-20 15:40:28.644911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:39.760 [2024-11-20 15:40:28.644941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:39.760 qpair failed and we were unable to recover it.
00:30:39.760 [2024-11-20 15:40:28.645209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:39.760 [2024-11-20 15:40:28.645239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:39.760 qpair failed and we were unable to recover it.
00:30:39.760 [2024-11-20 15:40:28.645612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:39.760 [2024-11-20 15:40:28.645643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:39.760 qpair failed and we were unable to recover it.
00:30:39.760 [2024-11-20 15:40:28.645896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:39.760 [2024-11-20 15:40:28.645926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:39.760 qpair failed and we were unable to recover it.
00:30:39.760 [2024-11-20 15:40:28.646172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:39.760 [2024-11-20 15:40:28.646202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:39.760 qpair failed and we were unable to recover it.
00:30:39.760 [2024-11-20 15:40:28.646571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:39.760 [2024-11-20 15:40:28.646600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:39.760 qpair failed and we were unable to recover it.
00:30:39.760 [2024-11-20 15:40:28.646966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:39.760 [2024-11-20 15:40:28.646996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:39.760 qpair failed and we were unable to recover it.
00:30:39.760 [2024-11-20 15:40:28.647350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.760 [2024-11-20 15:40:28.647388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.760 qpair failed and we were unable to recover it. 00:30:39.760 [2024-11-20 15:40:28.647742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.760 [2024-11-20 15:40:28.647772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.760 qpair failed and we were unable to recover it. 00:30:39.760 [2024-11-20 15:40:28.648142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.760 [2024-11-20 15:40:28.648193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.760 qpair failed and we were unable to recover it. 00:30:39.760 [2024-11-20 15:40:28.648588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.760 [2024-11-20 15:40:28.648625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.760 qpair failed and we were unable to recover it. 00:30:39.760 [2024-11-20 15:40:28.648985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.760 [2024-11-20 15:40:28.649016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.760 qpair failed and we were unable to recover it. 00:30:39.760 [2024-11-20 15:40:28.649304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.760 [2024-11-20 15:40:28.649336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.760 qpair failed and we were unable to recover it. 00:30:39.760 [2024-11-20 15:40:28.649593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.760 [2024-11-20 15:40:28.649623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.760 qpair failed and we were unable to recover it. 00:30:39.760 [2024-11-20 15:40:28.649983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.760 [2024-11-20 15:40:28.650014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.760 qpair failed and we were unable to recover it. 00:30:39.760 [2024-11-20 15:40:28.650263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.760 [2024-11-20 15:40:28.650298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.760 qpair failed and we were unable to recover it. 00:30:39.760 [2024-11-20 15:40:28.650687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.760 [2024-11-20 15:40:28.650719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.760 qpair failed and we were unable to recover it. 
00:30:39.760 [2024-11-20 15:40:28.650968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.760 [2024-11-20 15:40:28.650998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.760 qpair failed and we were unable to recover it. 00:30:39.760 [2024-11-20 15:40:28.651378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.760 [2024-11-20 15:40:28.651410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.760 qpair failed and we were unable to recover it. 00:30:39.760 [2024-11-20 15:40:28.651677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.760 [2024-11-20 15:40:28.651708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.760 qpair failed and we were unable to recover it. 00:30:39.760 [2024-11-20 15:40:28.652092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.760 [2024-11-20 15:40:28.652123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.760 qpair failed and we were unable to recover it. 00:30:39.760 [2024-11-20 15:40:28.652515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.760 [2024-11-20 15:40:28.652546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.760 qpair failed and we were unable to recover it. 00:30:39.760 [2024-11-20 15:40:28.652903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.760 [2024-11-20 15:40:28.652932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.760 qpair failed and we were unable to recover it. 00:30:39.761 [2024-11-20 15:40:28.653276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.761 [2024-11-20 15:40:28.653307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.761 qpair failed and we were unable to recover it. 00:30:39.761 [2024-11-20 15:40:28.653684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.761 [2024-11-20 15:40:28.653720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.761 qpair failed and we were unable to recover it. 00:30:39.761 [2024-11-20 15:40:28.653946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.761 [2024-11-20 15:40:28.653979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.761 qpair failed and we were unable to recover it. 00:30:39.761 [2024-11-20 15:40:28.654328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.761 [2024-11-20 15:40:28.654360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.761 qpair failed and we were unable to recover it. 
00:30:39.761 [2024-11-20 15:40:28.654622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.761 [2024-11-20 15:40:28.654651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.761 qpair failed and we were unable to recover it. 00:30:39.761 [2024-11-20 15:40:28.655005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.761 [2024-11-20 15:40:28.655035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.761 qpair failed and we were unable to recover it. 00:30:39.761 [2024-11-20 15:40:28.655419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.761 [2024-11-20 15:40:28.655450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.761 qpair failed and we were unable to recover it. 00:30:39.761 [2024-11-20 15:40:28.655830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.761 [2024-11-20 15:40:28.655861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.761 qpair failed and we were unable to recover it. 00:30:39.761 [2024-11-20 15:40:28.656239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.761 [2024-11-20 15:40:28.656272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.761 qpair failed and we were unable to recover it. 00:30:39.761 [2024-11-20 15:40:28.656556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.761 [2024-11-20 15:40:28.656586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.761 qpair failed and we were unable to recover it. 00:30:39.761 [2024-11-20 15:40:28.656961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.761 [2024-11-20 15:40:28.656993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.761 qpair failed and we were unable to recover it. 00:30:39.761 [2024-11-20 15:40:28.657345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.761 [2024-11-20 15:40:28.657379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.761 qpair failed and we were unable to recover it. 00:30:39.761 [2024-11-20 15:40:28.657761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.761 [2024-11-20 15:40:28.657791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.761 qpair failed and we were unable to recover it. 00:30:39.761 [2024-11-20 15:40:28.658180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.761 [2024-11-20 15:40:28.658212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.761 qpair failed and we were unable to recover it. 
00:30:39.761 [2024-11-20 15:40:28.658589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.761 [2024-11-20 15:40:28.658632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.761 qpair failed and we were unable to recover it. 00:30:39.761 [2024-11-20 15:40:28.659045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.761 [2024-11-20 15:40:28.659073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.761 qpair failed and we were unable to recover it. 00:30:39.761 [2024-11-20 15:40:28.659313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.761 [2024-11-20 15:40:28.659345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.761 qpair failed and we were unable to recover it. 00:30:39.761 [2024-11-20 15:40:28.659743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.761 [2024-11-20 15:40:28.659772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.761 qpair failed and we were unable to recover it. 00:30:39.761 [2024-11-20 15:40:28.660042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.761 [2024-11-20 15:40:28.660071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.761 qpair failed and we were unable to recover it. 00:30:39.761 [2024-11-20 15:40:28.660482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.761 [2024-11-20 15:40:28.660514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.761 qpair failed and we were unable to recover it. 00:30:39.761 [2024-11-20 15:40:28.660880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.761 [2024-11-20 15:40:28.660909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.761 qpair failed and we were unable to recover it. 00:30:39.761 [2024-11-20 15:40:28.661264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.761 [2024-11-20 15:40:28.661294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.761 qpair failed and we were unable to recover it. 00:30:39.761 [2024-11-20 15:40:28.661663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.761 [2024-11-20 15:40:28.661692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.761 qpair failed and we were unable to recover it. 00:30:39.761 [2024-11-20 15:40:28.662075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.761 [2024-11-20 15:40:28.662104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.761 qpair failed and we were unable to recover it. 
00:30:39.761 [2024-11-20 15:40:28.662512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.761 [2024-11-20 15:40:28.662542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.761 qpair failed and we were unable to recover it. 00:30:39.761 [2024-11-20 15:40:28.662909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.761 [2024-11-20 15:40:28.662938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.761 qpair failed and we were unable to recover it. 00:30:39.761 [2024-11-20 15:40:28.663303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.761 [2024-11-20 15:40:28.663334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.761 qpair failed and we were unable to recover it. 00:30:39.761 [2024-11-20 15:40:28.663659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.761 [2024-11-20 15:40:28.663688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.761 qpair failed and we were unable to recover it. 00:30:39.761 [2024-11-20 15:40:28.664045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.761 [2024-11-20 15:40:28.664074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.761 qpair failed and we were unable to recover it. 00:30:39.761 [2024-11-20 15:40:28.664444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.761 [2024-11-20 15:40:28.664474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.761 qpair failed and we were unable to recover it. 00:30:39.761 [2024-11-20 15:40:28.664821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.761 [2024-11-20 15:40:28.664849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.761 qpair failed and we were unable to recover it. 00:30:39.761 [2024-11-20 15:40:28.665231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.761 [2024-11-20 15:40:28.665263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.761 qpair failed and we were unable to recover it. 00:30:39.761 [2024-11-20 15:40:28.665513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.761 [2024-11-20 15:40:28.665542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.761 qpair failed and we were unable to recover it. 00:30:39.761 [2024-11-20 15:40:28.665793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.761 [2024-11-20 15:40:28.665825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.761 qpair failed and we were unable to recover it. 
00:30:39.761 [2024-11-20 15:40:28.666171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.761 [2024-11-20 15:40:28.666200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.761 qpair failed and we were unable to recover it. 00:30:39.761 [2024-11-20 15:40:28.666612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.761 [2024-11-20 15:40:28.666641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.761 qpair failed and we were unable to recover it. 00:30:39.761 [2024-11-20 15:40:28.666895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.761 [2024-11-20 15:40:28.666926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.761 qpair failed and we were unable to recover it. 00:30:39.761 [2024-11-20 15:40:28.667365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.761 [2024-11-20 15:40:28.667397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.761 qpair failed and we were unable to recover it. 00:30:39.762 [2024-11-20 15:40:28.667642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.762 [2024-11-20 15:40:28.667671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.762 qpair failed and we were unable to recover it. 00:30:39.762 [2024-11-20 15:40:28.668069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.762 [2024-11-20 15:40:28.668100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.762 qpair failed and we were unable to recover it. 00:30:39.762 [2024-11-20 15:40:28.668508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.762 [2024-11-20 15:40:28.668538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.762 qpair failed and we were unable to recover it. 00:30:39.762 [2024-11-20 15:40:28.668810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.762 [2024-11-20 15:40:28.668842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.762 qpair failed and we were unable to recover it. 00:30:39.762 [2024-11-20 15:40:28.669239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.762 [2024-11-20 15:40:28.669271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.762 qpair failed and we were unable to recover it. 00:30:39.762 [2024-11-20 15:40:28.669654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.762 [2024-11-20 15:40:28.669685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.762 qpair failed and we were unable to recover it. 
00:30:39.762 [2024-11-20 15:40:28.669950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.762 [2024-11-20 15:40:28.669978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.762 qpair failed and we were unable to recover it. 00:30:39.762 [2024-11-20 15:40:28.670432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.762 [2024-11-20 15:40:28.670463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.762 qpair failed and we were unable to recover it. 00:30:39.762 [2024-11-20 15:40:28.670830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.762 [2024-11-20 15:40:28.670859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.762 qpair failed and we were unable to recover it. 00:30:39.762 [2024-11-20 15:40:28.671220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.762 [2024-11-20 15:40:28.671251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.762 qpair failed and we were unable to recover it. 00:30:39.762 [2024-11-20 15:40:28.671593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.762 [2024-11-20 15:40:28.671622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.762 qpair failed and we were unable to recover it. 00:30:39.762 [2024-11-20 15:40:28.672010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.762 [2024-11-20 15:40:28.672038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.762 qpair failed and we were unable to recover it. 00:30:39.762 [2024-11-20 15:40:28.672326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.762 [2024-11-20 15:40:28.672356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.762 qpair failed and we were unable to recover it. 00:30:39.762 [2024-11-20 15:40:28.672764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.762 [2024-11-20 15:40:28.672795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.762 qpair failed and we were unable to recover it. 00:30:39.762 [2024-11-20 15:40:28.673204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.762 [2024-11-20 15:40:28.673236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.762 qpair failed and we were unable to recover it. 00:30:39.762 [2024-11-20 15:40:28.673633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.762 [2024-11-20 15:40:28.673661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.762 qpair failed and we were unable to recover it. 
00:30:39.762 [2024-11-20 15:40:28.673965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.762 [2024-11-20 15:40:28.674002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.762 qpair failed and we were unable to recover it. 00:30:39.762 [2024-11-20 15:40:28.674417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.762 [2024-11-20 15:40:28.674447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.762 qpair failed and we were unable to recover it. 00:30:39.762 [2024-11-20 15:40:28.674833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.762 [2024-11-20 15:40:28.674863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.762 qpair failed and we were unable to recover it. 00:30:39.762 [2024-11-20 15:40:28.675219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.762 [2024-11-20 15:40:28.675249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.762 qpair failed and we were unable to recover it. 00:30:39.762 [2024-11-20 15:40:28.675627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.762 [2024-11-20 15:40:28.675656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.762 qpair failed and we were unable to recover it. 00:30:39.762 [2024-11-20 15:40:28.676029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.762 [2024-11-20 15:40:28.676057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.762 qpair failed and we were unable to recover it. 00:30:39.762 [2024-11-20 15:40:28.676455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.762 [2024-11-20 15:40:28.676485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.762 qpair failed and we were unable to recover it. 00:30:39.762 [2024-11-20 15:40:28.676751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.762 [2024-11-20 15:40:28.676780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.762 qpair failed and we were unable to recover it. 00:30:39.762 [2024-11-20 15:40:28.677143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.762 [2024-11-20 15:40:28.677185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.762 qpair failed and we were unable to recover it. 00:30:39.762 [2024-11-20 15:40:28.677608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.762 [2024-11-20 15:40:28.677638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.762 qpair failed and we were unable to recover it. 
00:30:39.762 [2024-11-20 15:40:28.677988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.762 [2024-11-20 15:40:28.678017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.762 qpair failed and we were unable to recover it. 00:30:39.762 [2024-11-20 15:40:28.678413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.762 [2024-11-20 15:40:28.678444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.762 qpair failed and we were unable to recover it. 00:30:39.762 [2024-11-20 15:40:28.678703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.762 [2024-11-20 15:40:28.678733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.762 qpair failed and we were unable to recover it. 00:30:39.762 [2024-11-20 15:40:28.679106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.762 [2024-11-20 15:40:28.679135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.762 qpair failed and we were unable to recover it. 00:30:39.762 [2024-11-20 15:40:28.679510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.762 [2024-11-20 15:40:28.679541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.762 qpair failed and we were unable to recover it. 00:30:39.762 [2024-11-20 15:40:28.679918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.762 [2024-11-20 15:40:28.679949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.762 qpair failed and we were unable to recover it. 00:30:39.762 [2024-11-20 15:40:28.680208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.762 [2024-11-20 15:40:28.680238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.762 qpair failed and we were unable to recover it. 00:30:39.762 [2024-11-20 15:40:28.680624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.762 [2024-11-20 15:40:28.680652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.762 qpair failed and we were unable to recover it. 00:30:39.762 [2024-11-20 15:40:28.681062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.762 [2024-11-20 15:40:28.681091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.762 qpair failed and we were unable to recover it. 00:30:39.762 [2024-11-20 15:40:28.681455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.762 [2024-11-20 15:40:28.681486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.762 qpair failed and we were unable to recover it. 
00:30:39.762 [2024-11-20 15:40:28.681871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.762 [2024-11-20 15:40:28.681900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.762 qpair failed and we were unable to recover it. 00:30:39.762 [2024-11-20 15:40:28.682276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.763 [2024-11-20 15:40:28.682308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.763 qpair failed and we were unable to recover it. 00:30:39.763 [2024-11-20 15:40:28.682693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.763 [2024-11-20 15:40:28.682721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.763 qpair failed and we were unable to recover it. 00:30:39.763 [2024-11-20 15:40:28.683112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.763 [2024-11-20 15:40:28.683142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.763 qpair failed and we were unable to recover it. 00:30:39.763 [2024-11-20 15:40:28.683419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.763 [2024-11-20 15:40:28.683448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.763 qpair failed and we were unable to recover it. 00:30:39.763 [2024-11-20 15:40:28.683847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.763 [2024-11-20 15:40:28.683876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.763 qpair failed and we were unable to recover it. 00:30:39.763 [2024-11-20 15:40:28.684248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.763 [2024-11-20 15:40:28.684277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.763 qpair failed and we were unable to recover it. 00:30:39.763 [2024-11-20 15:40:28.684549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.763 [2024-11-20 15:40:28.684577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.763 qpair failed and we were unable to recover it. 00:30:39.763 [2024-11-20 15:40:28.684951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.763 [2024-11-20 15:40:28.684981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.763 qpair failed and we were unable to recover it. 00:30:39.763 [2024-11-20 15:40:28.685238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.763 [2024-11-20 15:40:28.685268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:39.763 qpair failed and we were unable to recover it. 
00:30:39.763 [2024-11-20 15:40:28.685527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:39.763 [2024-11-20 15:40:28.685556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:39.763 qpair failed and we were unable to recover it.
00:30:40.041 [2024-11-20 15:40:28.696889] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization...
00:30:40.041 [2024-11-20 15:40:28.696966] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:40.041 [2024-11-20 15:40:28.700499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.041 [2024-11-20 15:40:28.700530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.041 qpair failed and we were unable to recover it. 00:30:40.041 [2024-11-20 15:40:28.700891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.041 [2024-11-20 15:40:28.700922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.041 qpair failed and we were unable to recover it. 00:30:40.041 [2024-11-20 15:40:28.701299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.041 [2024-11-20 15:40:28.701329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.041 qpair failed and we were unable to recover it. 00:30:40.042 [2024-11-20 15:40:28.701680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.042 [2024-11-20 15:40:28.701711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.042 qpair failed and we were unable to recover it. 00:30:40.042 [2024-11-20 15:40:28.702099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.042 [2024-11-20 15:40:28.702129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.042 qpair failed and we were unable to recover it. 00:30:40.042 [2024-11-20 15:40:28.702576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.042 [2024-11-20 15:40:28.702607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.042 qpair failed and we were unable to recover it. 00:30:40.042 [2024-11-20 15:40:28.702891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.042 [2024-11-20 15:40:28.702921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.042 qpair failed and we were unable to recover it. 00:30:40.042 [2024-11-20 15:40:28.703193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.042 [2024-11-20 15:40:28.703224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.042 qpair failed and we were unable to recover it. 00:30:40.042 [2024-11-20 15:40:28.703649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.042 [2024-11-20 15:40:28.703678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.042 qpair failed and we were unable to recover it. 00:30:40.042 [2024-11-20 15:40:28.704057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.042 [2024-11-20 15:40:28.704087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.042 qpair failed and we were unable to recover it. 
00:30:40.042 [2024-11-20 15:40:28.704447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.042 [2024-11-20 15:40:28.704480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.042 qpair failed and we were unable to recover it. 00:30:40.042 [2024-11-20 15:40:28.704725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.042 [2024-11-20 15:40:28.704758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.042 qpair failed and we were unable to recover it. 00:30:40.042 [2024-11-20 15:40:28.705121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.042 [2024-11-20 15:40:28.705151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.042 qpair failed and we were unable to recover it. 00:30:40.042 [2024-11-20 15:40:28.705573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.042 [2024-11-20 15:40:28.705604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.042 qpair failed and we were unable to recover it. 00:30:40.042 [2024-11-20 15:40:28.705860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.042 [2024-11-20 15:40:28.705890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.042 qpair failed and we were unable to recover it. 00:30:40.042 [2024-11-20 15:40:28.706261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.042 [2024-11-20 15:40:28.706292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.042 qpair failed and we were unable to recover it. 00:30:40.042 [2024-11-20 15:40:28.706537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.042 [2024-11-20 15:40:28.706567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.042 qpair failed and we were unable to recover it. 00:30:40.042 [2024-11-20 15:40:28.706953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.042 [2024-11-20 15:40:28.706984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.042 qpair failed and we were unable to recover it. 00:30:40.042 [2024-11-20 15:40:28.707349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.042 [2024-11-20 15:40:28.707380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.042 qpair failed and we were unable to recover it. 00:30:40.042 [2024-11-20 15:40:28.707640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.042 [2024-11-20 15:40:28.707669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.042 qpair failed and we were unable to recover it. 
00:30:40.042 [2024-11-20 15:40:28.707906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.042 [2024-11-20 15:40:28.707934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.042 qpair failed and we were unable to recover it. 00:30:40.042 [2024-11-20 15:40:28.708314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.042 [2024-11-20 15:40:28.708345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.042 qpair failed and we were unable to recover it. 00:30:40.042 [2024-11-20 15:40:28.708743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.042 [2024-11-20 15:40:28.708772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.042 qpair failed and we were unable to recover it. 00:30:40.042 [2024-11-20 15:40:28.709146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.042 [2024-11-20 15:40:28.709193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.042 qpair failed and we were unable to recover it. 00:30:40.042 [2024-11-20 15:40:28.709434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.042 [2024-11-20 15:40:28.709464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.042 qpair failed and we were unable to recover it. 00:30:40.042 [2024-11-20 15:40:28.709693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.042 [2024-11-20 15:40:28.709726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.042 qpair failed and we were unable to recover it. 00:30:40.042 [2024-11-20 15:40:28.710012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.042 [2024-11-20 15:40:28.710043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.042 qpair failed and we were unable to recover it. 00:30:40.042 [2024-11-20 15:40:28.710476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.042 [2024-11-20 15:40:28.710506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.042 qpair failed and we were unable to recover it. 00:30:40.042 [2024-11-20 15:40:28.710876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.042 [2024-11-20 15:40:28.710906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.042 qpair failed and we were unable to recover it. 00:30:40.042 [2024-11-20 15:40:28.711151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.042 [2024-11-20 15:40:28.711193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.042 qpair failed and we were unable to recover it. 
00:30:40.042 [2024-11-20 15:40:28.711585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.042 [2024-11-20 15:40:28.711614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.042 qpair failed and we were unable to recover it. 00:30:40.042 [2024-11-20 15:40:28.711971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.042 [2024-11-20 15:40:28.712003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.042 qpair failed and we were unable to recover it. 00:30:40.042 [2024-11-20 15:40:28.712359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.042 [2024-11-20 15:40:28.712390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.042 qpair failed and we were unable to recover it. 00:30:40.042 [2024-11-20 15:40:28.712742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.042 [2024-11-20 15:40:28.712771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.042 qpair failed and we were unable to recover it. 00:30:40.042 [2024-11-20 15:40:28.713156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.042 [2024-11-20 15:40:28.713218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.042 qpair failed and we were unable to recover it. 00:30:40.042 [2024-11-20 15:40:28.713465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.042 [2024-11-20 15:40:28.713495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.042 qpair failed and we were unable to recover it. 00:30:40.042 [2024-11-20 15:40:28.713868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.042 [2024-11-20 15:40:28.713898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.042 qpair failed and we were unable to recover it. 00:30:40.042 [2024-11-20 15:40:28.714256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.042 [2024-11-20 15:40:28.714287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.042 qpair failed and we were unable to recover it. 00:30:40.042 [2024-11-20 15:40:28.714734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.043 [2024-11-20 15:40:28.714764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.043 qpair failed and we were unable to recover it. 00:30:40.043 [2024-11-20 15:40:28.715216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.043 [2024-11-20 15:40:28.715248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.043 qpair failed and we were unable to recover it. 
00:30:40.043 [2024-11-20 15:40:28.715631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.043 [2024-11-20 15:40:28.715660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.043 qpair failed and we were unable to recover it. 00:30:40.043 [2024-11-20 15:40:28.716035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.043 [2024-11-20 15:40:28.716066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.043 qpair failed and we were unable to recover it. 00:30:40.043 [2024-11-20 15:40:28.716427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.043 [2024-11-20 15:40:28.716459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.043 qpair failed and we were unable to recover it. 00:30:40.043 [2024-11-20 15:40:28.716822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.043 [2024-11-20 15:40:28.716851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.043 qpair failed and we were unable to recover it. 00:30:40.043 [2024-11-20 15:40:28.717272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.043 [2024-11-20 15:40:28.717303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.043 qpair failed and we were unable to recover it. 00:30:40.043 [2024-11-20 15:40:28.717658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.043 [2024-11-20 15:40:28.717688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.043 qpair failed and we were unable to recover it. 00:30:40.043 [2024-11-20 15:40:28.718071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.043 [2024-11-20 15:40:28.718101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.043 qpair failed and we were unable to recover it. 00:30:40.043 [2024-11-20 15:40:28.718363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.043 [2024-11-20 15:40:28.718394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.043 qpair failed and we were unable to recover it. 00:30:40.043 [2024-11-20 15:40:28.718766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.043 [2024-11-20 15:40:28.718795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.043 qpair failed and we were unable to recover it. 00:30:40.043 [2024-11-20 15:40:28.719199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.043 [2024-11-20 15:40:28.719230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.043 qpair failed and we were unable to recover it. 
00:30:40.043 [2024-11-20 15:40:28.719579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.043 [2024-11-20 15:40:28.719609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.043 qpair failed and we were unable to recover it. 00:30:40.043 [2024-11-20 15:40:28.719872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.043 [2024-11-20 15:40:28.719903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.043 qpair failed and we were unable to recover it. 00:30:40.043 [2024-11-20 15:40:28.720291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.043 [2024-11-20 15:40:28.720322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.043 qpair failed and we were unable to recover it. 00:30:40.043 [2024-11-20 15:40:28.720644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.043 [2024-11-20 15:40:28.720674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.043 qpair failed and we were unable to recover it. 00:30:40.043 [2024-11-20 15:40:28.721018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.043 [2024-11-20 15:40:28.721048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.043 qpair failed and we were unable to recover it. 00:30:40.043 [2024-11-20 15:40:28.721307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.043 [2024-11-20 15:40:28.721338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.043 qpair failed and we were unable to recover it. 00:30:40.043 [2024-11-20 15:40:28.721774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.043 [2024-11-20 15:40:28.721804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.043 qpair failed and we were unable to recover it. 00:30:40.043 [2024-11-20 15:40:28.722183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.043 [2024-11-20 15:40:28.722214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.043 qpair failed and we were unable to recover it. 00:30:40.043 [2024-11-20 15:40:28.722593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.043 [2024-11-20 15:40:28.722622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.043 qpair failed and we were unable to recover it. 00:30:40.043 [2024-11-20 15:40:28.722997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.043 [2024-11-20 15:40:28.723027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.043 qpair failed and we were unable to recover it. 
00:30:40.043 [2024-11-20 15:40:28.723453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.043 [2024-11-20 15:40:28.723484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.043 qpair failed and we were unable to recover it. 00:30:40.043 [2024-11-20 15:40:28.723859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.043 [2024-11-20 15:40:28.723889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.043 qpair failed and we were unable to recover it. 00:30:40.043 [2024-11-20 15:40:28.724250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.043 [2024-11-20 15:40:28.724280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.043 qpair failed and we were unable to recover it. 00:30:40.043 [2024-11-20 15:40:28.724668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.043 [2024-11-20 15:40:28.724704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.043 qpair failed and we were unable to recover it. 00:30:40.043 [2024-11-20 15:40:28.725118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.043 [2024-11-20 15:40:28.725147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.043 qpair failed and we were unable to recover it. 00:30:40.043 [2024-11-20 15:40:28.725542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.043 [2024-11-20 15:40:28.725573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.043 qpair failed and we were unable to recover it. 00:30:40.043 [2024-11-20 15:40:28.725946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.043 [2024-11-20 15:40:28.725976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.043 qpair failed and we were unable to recover it. 00:30:40.043 [2024-11-20 15:40:28.726374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.043 [2024-11-20 15:40:28.726405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.043 qpair failed and we were unable to recover it. 00:30:40.043 [2024-11-20 15:40:28.726757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.043 [2024-11-20 15:40:28.726788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.043 qpair failed and we were unable to recover it. 00:30:40.043 [2024-11-20 15:40:28.727152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.043 [2024-11-20 15:40:28.727205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.043 qpair failed and we were unable to recover it. 
00:30:40.043 [2024-11-20 15:40:28.727416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.043 [2024-11-20 15:40:28.727448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.043 qpair failed and we were unable to recover it. 00:30:40.043 [2024-11-20 15:40:28.727840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.043 [2024-11-20 15:40:28.727871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.043 qpair failed and we were unable to recover it. 00:30:40.043 [2024-11-20 15:40:28.728247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.043 [2024-11-20 15:40:28.728279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.043 qpair failed and we were unable to recover it. 00:30:40.043 [2024-11-20 15:40:28.728646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.043 [2024-11-20 15:40:28.728675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.043 qpair failed and we were unable to recover it. 00:30:40.043 [2024-11-20 15:40:28.729097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.043 [2024-11-20 15:40:28.729126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.043 qpair failed and we were unable to recover it. 00:30:40.043 [2024-11-20 15:40:28.729523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.043 [2024-11-20 15:40:28.729555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.043 qpair failed and we were unable to recover it. 00:30:40.044 [2024-11-20 15:40:28.729947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.044 [2024-11-20 15:40:28.729978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.044 qpair failed and we were unable to recover it. 00:30:40.044 [2024-11-20 15:40:28.730340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.044 [2024-11-20 15:40:28.730370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.044 qpair failed and we were unable to recover it. 00:30:40.044 [2024-11-20 15:40:28.730743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.044 [2024-11-20 15:40:28.730774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.044 qpair failed and we were unable to recover it. 00:30:40.044 [2024-11-20 15:40:28.731130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.044 [2024-11-20 15:40:28.731167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.044 qpair failed and we were unable to recover it. 
00:30:40.044 [2024-11-20 15:40:28.731438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.044 [2024-11-20 15:40:28.731467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.044 qpair failed and we were unable to recover it. 00:30:40.044 [2024-11-20 15:40:28.731830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.044 [2024-11-20 15:40:28.731859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.044 qpair failed and we were unable to recover it. 00:30:40.044 [2024-11-20 15:40:28.732227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.044 [2024-11-20 15:40:28.732259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.044 qpair failed and we were unable to recover it. 00:30:40.044 [2024-11-20 15:40:28.732627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.044 [2024-11-20 15:40:28.732656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.044 qpair failed and we were unable to recover it. 00:30:40.044 [2024-11-20 15:40:28.733023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.044 [2024-11-20 15:40:28.733053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.044 qpair failed and we were unable to recover it. 00:30:40.044 [2024-11-20 15:40:28.733408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.044 [2024-11-20 15:40:28.733438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.044 qpair failed and we were unable to recover it. 00:30:40.044 [2024-11-20 15:40:28.733790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.044 [2024-11-20 15:40:28.733818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.044 qpair failed and we were unable to recover it. 00:30:40.044 [2024-11-20 15:40:28.734196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.044 [2024-11-20 15:40:28.734225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.044 qpair failed and we were unable to recover it. 00:30:40.044 [2024-11-20 15:40:28.734617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.044 [2024-11-20 15:40:28.734645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.044 qpair failed and we were unable to recover it. 00:30:40.044 [2024-11-20 15:40:28.734988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.044 [2024-11-20 15:40:28.735017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.044 qpair failed and we were unable to recover it. 
00:30:40.044 [2024-11-20 15:40:28.735385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.044 [2024-11-20 15:40:28.735416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.044 qpair failed and we were unable to recover it. 00:30:40.044 [2024-11-20 15:40:28.735748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.044 [2024-11-20 15:40:28.735777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.044 qpair failed and we were unable to recover it. 00:30:40.044 [2024-11-20 15:40:28.736141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.044 [2024-11-20 15:40:28.736181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.044 qpair failed and we were unable to recover it. 00:30:40.044 [2024-11-20 15:40:28.736586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.044 [2024-11-20 15:40:28.736614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.044 qpair failed and we were unable to recover it. 00:30:40.044 [2024-11-20 15:40:28.736979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.044 [2024-11-20 15:40:28.737008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.044 qpair failed and we were unable to recover it. 00:30:40.044 [2024-11-20 15:40:28.737391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.044 [2024-11-20 15:40:28.737423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.044 qpair failed and we were unable to recover it. 00:30:40.044 [2024-11-20 15:40:28.737798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.044 [2024-11-20 15:40:28.737827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.044 qpair failed and we were unable to recover it. 00:30:40.044 [2024-11-20 15:40:28.738203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.044 [2024-11-20 15:40:28.738247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.044 qpair failed and we were unable to recover it. 00:30:40.044 [2024-11-20 15:40:28.738494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.044 [2024-11-20 15:40:28.738525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.044 qpair failed and we were unable to recover it. 00:30:40.044 [2024-11-20 15:40:28.738900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.044 [2024-11-20 15:40:28.738929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.044 qpair failed and we were unable to recover it. 
00:30:40.044 [2024-11-20 15:40:28.739204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.044 [2024-11-20 15:40:28.739233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.044 qpair failed and we were unable to recover it. 00:30:40.044 [2024-11-20 15:40:28.739500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.044 [2024-11-20 15:40:28.739529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.044 qpair failed and we were unable to recover it. 00:30:40.044 [2024-11-20 15:40:28.739897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.044 [2024-11-20 15:40:28.739926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.044 qpair failed and we were unable to recover it. 00:30:40.044 [2024-11-20 15:40:28.740275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.044 [2024-11-20 15:40:28.740312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.044 qpair failed and we were unable to recover it. 00:30:40.044 [2024-11-20 15:40:28.740685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.044 [2024-11-20 15:40:28.740714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.044 qpair failed and we were unable to recover it. 00:30:40.044 [2024-11-20 15:40:28.740984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.044 [2024-11-20 15:40:28.741012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.044 qpair failed and we were unable to recover it. 00:30:40.044 [2024-11-20 15:40:28.741359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.044 [2024-11-20 15:40:28.741389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.044 qpair failed and we were unable to recover it. 00:30:40.044 [2024-11-20 15:40:28.741762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.044 [2024-11-20 15:40:28.741791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.044 qpair failed and we were unable to recover it. 00:30:40.044 [2024-11-20 15:40:28.742128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.044 [2024-11-20 15:40:28.742156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.044 qpair failed and we were unable to recover it. 00:30:40.044 [2024-11-20 15:40:28.742572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.044 [2024-11-20 15:40:28.742601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.044 qpair failed and we were unable to recover it. 
00:30:40.044 [2024-11-20 15:40:28.742959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.044 [2024-11-20 15:40:28.742988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.044 qpair failed and we were unable to recover it. 00:30:40.044 [2024-11-20 15:40:28.743346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.044 [2024-11-20 15:40:28.743376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.044 qpair failed and we were unable to recover it. 00:30:40.044 [2024-11-20 15:40:28.743643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.044 [2024-11-20 15:40:28.743672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.044 qpair failed and we were unable to recover it. 00:30:40.044 [2024-11-20 15:40:28.744048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.045 [2024-11-20 15:40:28.744076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.045 qpair failed and we were unable to recover it. 00:30:40.045 [2024-11-20 15:40:28.744440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.045 [2024-11-20 15:40:28.744470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.045 qpair failed and we were unable to recover it. 00:30:40.045 [2024-11-20 15:40:28.744806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.045 [2024-11-20 15:40:28.744836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.045 qpair failed and we were unable to recover it. 00:30:40.045 [2024-11-20 15:40:28.745097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.045 [2024-11-20 15:40:28.745126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.045 qpair failed and we were unable to recover it. 00:30:40.045 [2024-11-20 15:40:28.745532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.045 [2024-11-20 15:40:28.745562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.045 qpair failed and we were unable to recover it. 00:30:40.045 [2024-11-20 15:40:28.745825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.045 [2024-11-20 15:40:28.745853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.045 qpair failed and we were unable to recover it. 00:30:40.045 [2024-11-20 15:40:28.746204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.045 [2024-11-20 15:40:28.746234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.045 qpair failed and we were unable to recover it. 
00:30:40.045 [2024-11-20 15:40:28.746576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.045 [2024-11-20 15:40:28.746605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.045 qpair failed and we were unable to recover it. 00:30:40.045 [2024-11-20 15:40:28.746972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.045 [2024-11-20 15:40:28.747002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.045 qpair failed and we were unable to recover it. 00:30:40.045 [2024-11-20 15:40:28.747235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.045 [2024-11-20 15:40:28.747267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.045 qpair failed and we were unable to recover it. 00:30:40.045 [2024-11-20 15:40:28.747656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.045 [2024-11-20 15:40:28.747685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.045 qpair failed and we were unable to recover it. 00:30:40.045 [2024-11-20 15:40:28.748061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.045 [2024-11-20 15:40:28.748089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.045 qpair failed and we were unable to recover it. 00:30:40.045 [2024-11-20 15:40:28.748465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.045 [2024-11-20 15:40:28.748495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.045 qpair failed and we were unable to recover it. 00:30:40.045 [2024-11-20 15:40:28.748862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.045 [2024-11-20 15:40:28.748891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.045 qpair failed and we were unable to recover it. 00:30:40.045 [2024-11-20 15:40:28.749143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.045 [2024-11-20 15:40:28.749180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.045 qpair failed and we were unable to recover it. 00:30:40.045 [2024-11-20 15:40:28.749541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.045 [2024-11-20 15:40:28.749570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.045 qpair failed and we were unable to recover it. 00:30:40.045 [2024-11-20 15:40:28.749783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.045 [2024-11-20 15:40:28.749811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.045 qpair failed and we were unable to recover it. 
00:30:40.045 [2024-11-20 15:40:28.750050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.045 [2024-11-20 15:40:28.750080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.045 qpair failed and we were unable to recover it. 00:30:40.045 [2024-11-20 15:40:28.750330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.045 [2024-11-20 15:40:28.750363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.045 qpair failed and we were unable to recover it. 00:30:40.045 [2024-11-20 15:40:28.750724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.045 [2024-11-20 15:40:28.750756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.045 qpair failed and we were unable to recover it. 00:30:40.045 [2024-11-20 15:40:28.751116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.045 [2024-11-20 15:40:28.751145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.045 qpair failed and we were unable to recover it. 00:30:40.045 [2024-11-20 15:40:28.751560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.045 [2024-11-20 15:40:28.751591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.045 qpair failed and we were unable to recover it. 00:30:40.045 [2024-11-20 15:40:28.751960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.045 [2024-11-20 15:40:28.751992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.045 qpair failed and we were unable to recover it. 00:30:40.045 [2024-11-20 15:40:28.752428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.045 [2024-11-20 15:40:28.752460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.045 qpair failed and we were unable to recover it. 00:30:40.045 [2024-11-20 15:40:28.752833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.045 [2024-11-20 15:40:28.752861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.045 qpair failed and we were unable to recover it. 00:30:40.045 [2024-11-20 15:40:28.753212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.045 [2024-11-20 15:40:28.753242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.045 qpair failed and we were unable to recover it. 00:30:40.045 [2024-11-20 15:40:28.753608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.045 [2024-11-20 15:40:28.753637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.045 qpair failed and we were unable to recover it. 
00:30:40.049 [2024-11-20 15:40:28.797904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:30:40.049 [... connect()/qpair failure sequence continues to repeat through 15:40:28.830294 ...]
00:30:40.051 [2024-11-20 15:40:28.830643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.051 [2024-11-20 15:40:28.830671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.051 qpair failed and we were unable to recover it. 00:30:40.051 [2024-11-20 15:40:28.831020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.051 [2024-11-20 15:40:28.831050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.051 qpair failed and we were unable to recover it. 00:30:40.051 [2024-11-20 15:40:28.831289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.051 [2024-11-20 15:40:28.831322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.051 qpair failed and we were unable to recover it. 00:30:40.051 [2024-11-20 15:40:28.831707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.051 [2024-11-20 15:40:28.831737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.051 qpair failed and we were unable to recover it. 00:30:40.051 [2024-11-20 15:40:28.832098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.051 [2024-11-20 15:40:28.832133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.051 qpair failed and we were unable to recover it. 00:30:40.051 [2024-11-20 15:40:28.832580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.051 [2024-11-20 15:40:28.832610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.051 qpair failed and we were unable to recover it. 00:30:40.051 [2024-11-20 15:40:28.832962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.051 [2024-11-20 15:40:28.832991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.051 qpair failed and we were unable to recover it. 00:30:40.051 [2024-11-20 15:40:28.833298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.051 [2024-11-20 15:40:28.833328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.051 qpair failed and we were unable to recover it. 00:30:40.051 [2024-11-20 15:40:28.833697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.051 [2024-11-20 15:40:28.833726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.051 qpair failed and we were unable to recover it. 00:30:40.051 [2024-11-20 15:40:28.834083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.051 [2024-11-20 15:40:28.834111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.051 qpair failed and we were unable to recover it. 
00:30:40.051 [2024-11-20 15:40:28.834466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.051 [2024-11-20 15:40:28.834497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.051 qpair failed and we were unable to recover it. 00:30:40.051 [2024-11-20 15:40:28.834856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.051 [2024-11-20 15:40:28.834886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.051 qpair failed and we were unable to recover it. 00:30:40.051 [2024-11-20 15:40:28.835246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.051 [2024-11-20 15:40:28.835276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.051 qpair failed and we were unable to recover it. 00:30:40.051 [2024-11-20 15:40:28.835658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.051 [2024-11-20 15:40:28.835687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.051 qpair failed and we were unable to recover it. 00:30:40.051 [2024-11-20 15:40:28.836044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.051 [2024-11-20 15:40:28.836071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.051 qpair failed and we were unable to recover it. 00:30:40.051 [2024-11-20 15:40:28.836421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.051 [2024-11-20 15:40:28.836452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.051 qpair failed and we were unable to recover it. 00:30:40.051 [2024-11-20 15:40:28.836817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.051 [2024-11-20 15:40:28.836848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.051 qpair failed and we were unable to recover it. 00:30:40.051 [2024-11-20 15:40:28.837150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.051 [2024-11-20 15:40:28.837190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.051 qpair failed and we were unable to recover it. 00:30:40.051 [2024-11-20 15:40:28.837537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.051 [2024-11-20 15:40:28.837566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.051 qpair failed and we were unable to recover it. 00:30:40.051 [2024-11-20 15:40:28.837924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.051 [2024-11-20 15:40:28.837953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.051 qpair failed and we were unable to recover it. 
00:30:40.051 [2024-11-20 15:40:28.838322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.051 [2024-11-20 15:40:28.838353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.051 qpair failed and we were unable to recover it. 00:30:40.051 [2024-11-20 15:40:28.838721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.051 [2024-11-20 15:40:28.838751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.051 qpair failed and we were unable to recover it. 00:30:40.051 [2024-11-20 15:40:28.839123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.052 [2024-11-20 15:40:28.839151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.052 qpair failed and we were unable to recover it. 00:30:40.052 [2024-11-20 15:40:28.839545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.052 [2024-11-20 15:40:28.839574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.052 qpair failed and we were unable to recover it. 00:30:40.052 [2024-11-20 15:40:28.839957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.052 [2024-11-20 15:40:28.839985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.052 qpair failed and we were unable to recover it. 00:30:40.052 [2024-11-20 15:40:28.840336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.052 [2024-11-20 15:40:28.840365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.052 qpair failed and we were unable to recover it. 00:30:40.052 [2024-11-20 15:40:28.840816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.052 [2024-11-20 15:40:28.840846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.052 qpair failed and we were unable to recover it. 00:30:40.052 [2024-11-20 15:40:28.841205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.052 [2024-11-20 15:40:28.841236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.052 qpair failed and we were unable to recover it. 00:30:40.052 [2024-11-20 15:40:28.841644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.052 [2024-11-20 15:40:28.841672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.052 qpair failed and we were unable to recover it. 00:30:40.052 [2024-11-20 15:40:28.842032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.052 [2024-11-20 15:40:28.842061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.052 qpair failed and we were unable to recover it. 
00:30:40.052 [2024-11-20 15:40:28.842371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.052 [2024-11-20 15:40:28.842402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.052 qpair failed and we were unable to recover it. 00:30:40.052 [2024-11-20 15:40:28.842805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.052 [2024-11-20 15:40:28.842834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.052 qpair failed and we were unable to recover it. 00:30:40.052 [2024-11-20 15:40:28.843203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.052 [2024-11-20 15:40:28.843234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.052 qpair failed and we were unable to recover it. 00:30:40.052 [2024-11-20 15:40:28.843610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.052 [2024-11-20 15:40:28.843640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.052 qpair failed and we were unable to recover it. 00:30:40.052 [2024-11-20 15:40:28.844000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.052 [2024-11-20 15:40:28.844030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.052 qpair failed and we were unable to recover it. 00:30:40.052 [2024-11-20 15:40:28.844385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.052 [2024-11-20 15:40:28.844415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.052 qpair failed and we were unable to recover it. 00:30:40.052 [2024-11-20 15:40:28.844756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.052 [2024-11-20 15:40:28.844784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.052 qpair failed and we were unable to recover it. 00:30:40.052 [2024-11-20 15:40:28.845139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.052 [2024-11-20 15:40:28.845176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.052 qpair failed and we were unable to recover it. 00:30:40.052 [2024-11-20 15:40:28.845500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.052 [2024-11-20 15:40:28.845528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.052 qpair failed and we were unable to recover it. 00:30:40.052 [2024-11-20 15:40:28.845893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.052 [2024-11-20 15:40:28.845924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.052 qpair failed and we were unable to recover it. 
00:30:40.052 [2024-11-20 15:40:28.846280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.052 [2024-11-20 15:40:28.846311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.052 qpair failed and we were unable to recover it. 00:30:40.052 [2024-11-20 15:40:28.846570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.052 [2024-11-20 15:40:28.846599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.052 qpair failed and we were unable to recover it. 00:30:40.052 [2024-11-20 15:40:28.846956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.052 [2024-11-20 15:40:28.846984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.052 qpair failed and we were unable to recover it. 00:30:40.052 [2024-11-20 15:40:28.847352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.052 [2024-11-20 15:40:28.847383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.052 qpair failed and we were unable to recover it. 00:30:40.052 [2024-11-20 15:40:28.847715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.052 [2024-11-20 15:40:28.847750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.052 qpair failed and we were unable to recover it. 00:30:40.052 [2024-11-20 15:40:28.848109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.052 [2024-11-20 15:40:28.848138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.052 qpair failed and we were unable to recover it. 00:30:40.052 [2024-11-20 15:40:28.848501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.052 [2024-11-20 15:40:28.848530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.052 qpair failed and we were unable to recover it. 00:30:40.052 [2024-11-20 15:40:28.848885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.052 [2024-11-20 15:40:28.848914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.052 qpair failed and we were unable to recover it. 00:30:40.052 [2024-11-20 15:40:28.849274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.052 [2024-11-20 15:40:28.849303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.052 qpair failed and we were unable to recover it. 00:30:40.052 [2024-11-20 15:40:28.849651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.052 [2024-11-20 15:40:28.849681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.052 qpair failed and we were unable to recover it. 
00:30:40.052 [2024-11-20 15:40:28.850041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.052 [2024-11-20 15:40:28.850072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.052 qpair failed and we were unable to recover it. 00:30:40.052 [2024-11-20 15:40:28.850419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.052 [2024-11-20 15:40:28.850451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.052 qpair failed and we were unable to recover it. 00:30:40.052 [2024-11-20 15:40:28.850752] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:40.052 [2024-11-20 15:40:28.850803] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:40.052 [2024-11-20 15:40:28.850814] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:40.052 [2024-11-20 15:40:28.850822] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:40.052 [2024-11-20 15:40:28.850828] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:40.052 [2024-11-20 15:40:28.850794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.052 [2024-11-20 15:40:28.850823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.052 qpair failed and we were unable to recover it. 00:30:40.052 [2024-11-20 15:40:28.851105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.052 [2024-11-20 15:40:28.851133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.052 qpair failed and we were unable to recover it. 00:30:40.052 [2024-11-20 15:40:28.851526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.052 [2024-11-20 15:40:28.851555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.052 qpair failed and we were unable to recover it. 00:30:40.052 [2024-11-20 15:40:28.851918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.052 [2024-11-20 15:40:28.851948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.052 qpair failed and we were unable to recover it. 00:30:40.052 [2024-11-20 15:40:28.852330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.052 [2024-11-20 15:40:28.852360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.052 qpair failed and we were unable to recover it. 00:30:40.052 [2024-11-20 15:40:28.852723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.052 [2024-11-20 15:40:28.852753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.052 qpair failed and we were unable to recover it.
00:30:40.053 [2024-11-20 15:40:28.852841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:40.053 [2024-11-20 15:40:28.853005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:40.053 [2024-11-20 15:40:28.853095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.053 [2024-11-20 15:40:28.853127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.053 qpair failed and we were unable to recover it. 00:30:40.053 [2024-11-20 15:40:28.853229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:40.053 [2024-11-20 15:40:28.853254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:40.053 [2024-11-20 15:40:28.853481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.053 [2024-11-20 15:40:28.853510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.053 qpair failed and we were unable to recover it. 00:30:40.053 [2024-11-20 15:40:28.853872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.053 [2024-11-20 15:40:28.853900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.053 qpair failed and we were unable to recover it. 00:30:40.053 [2024-11-20 15:40:28.854179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.053 [2024-11-20 15:40:28.854209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.053 qpair failed and we were unable to recover it. 00:30:40.053 [2024-11-20 15:40:28.854577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.053 [2024-11-20 15:40:28.854608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.053 qpair failed and we were unable to recover it. 00:30:40.053 [2024-11-20 15:40:28.854963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.053 [2024-11-20 15:40:28.854993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.053 qpair failed and we were unable to recover it. 00:30:40.053 [2024-11-20 15:40:28.855209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.053 [2024-11-20 15:40:28.855242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.053 qpair failed and we were unable to recover it. 00:30:40.053 [2024-11-20 15:40:28.855502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.053 [2024-11-20 15:40:28.855532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.053 qpair failed and we were unable to recover it. 00:30:40.053 [2024-11-20 15:40:28.855915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.053 [2024-11-20 15:40:28.855944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.053 qpair failed and we were unable to recover it.
00:30:40.053 [2024-11-20 15:40:28.856209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.053 [2024-11-20 15:40:28.856240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.053 qpair failed and we were unable to recover it. 00:30:40.053 [2024-11-20 15:40:28.856602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.053 [2024-11-20 15:40:28.856632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.053 qpair failed and we were unable to recover it. 00:30:40.053 [2024-11-20 15:40:28.856981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.053 [2024-11-20 15:40:28.857010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.053 qpair failed and we were unable to recover it. 00:30:40.053 [2024-11-20 15:40:28.857392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.053 [2024-11-20 15:40:28.857422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.053 qpair failed and we were unable to recover it. 00:30:40.053 [2024-11-20 15:40:28.857808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.053 [2024-11-20 15:40:28.857838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.053 qpair failed and we were unable to recover it. 00:30:40.053 [2024-11-20 15:40:28.858096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.053 [2024-11-20 15:40:28.858125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.053 qpair failed and we were unable to recover it. 00:30:40.053 [2024-11-20 15:40:28.858404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.053 [2024-11-20 15:40:28.858435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.053 qpair failed and we were unable to recover it. 00:30:40.053 [2024-11-20 15:40:28.858793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.053 [2024-11-20 15:40:28.858823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.053 qpair failed and we were unable to recover it. 00:30:40.053 [2024-11-20 15:40:28.859085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.053 [2024-11-20 15:40:28.859117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.053 qpair failed and we were unable to recover it. 00:30:40.053 [2024-11-20 15:40:28.859504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.053 [2024-11-20 15:40:28.859535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.053 qpair failed and we were unable to recover it. 
00:30:40.053 [2024-11-20 15:40:28.859896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.053 [2024-11-20 15:40:28.859924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.053 qpair failed and we were unable to recover it. 00:30:40.053 [2024-11-20 15:40:28.860184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.053 [2024-11-20 15:40:28.860214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.053 qpair failed and we were unable to recover it. 00:30:40.053 [2024-11-20 15:40:28.860630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.053 [2024-11-20 15:40:28.860658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.053 qpair failed and we were unable to recover it. 00:30:40.053 [2024-11-20 15:40:28.860947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.053 [2024-11-20 15:40:28.860975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.053 qpair failed and we were unable to recover it. 00:30:40.053 [2024-11-20 15:40:28.861396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.053 [2024-11-20 15:40:28.861427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.053 qpair failed and we were unable to recover it. 00:30:40.053 [2024-11-20 15:40:28.861792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.053 [2024-11-20 15:40:28.861821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.053 qpair failed and we were unable to recover it. 00:30:40.053 [2024-11-20 15:40:28.862052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.053 [2024-11-20 15:40:28.862081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.053 qpair failed and we were unable to recover it. 00:30:40.053 [2024-11-20 15:40:28.862357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.053 [2024-11-20 15:40:28.862389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.053 qpair failed and we were unable to recover it. 00:30:40.053 [2024-11-20 15:40:28.862636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.053 [2024-11-20 15:40:28.862664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.053 qpair failed and we were unable to recover it. 00:30:40.053 [2024-11-20 15:40:28.863023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.053 [2024-11-20 15:40:28.863053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.053 qpair failed and we were unable to recover it. 
00:30:40.053 [2024-11-20 15:40:28.863419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.053 [2024-11-20 15:40:28.863450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.053 qpair failed and we were unable to recover it. 00:30:40.053 [2024-11-20 15:40:28.863817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.053 [2024-11-20 15:40:28.863846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.053 qpair failed and we were unable to recover it. 00:30:40.053 [2024-11-20 15:40:28.864204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.053 [2024-11-20 15:40:28.864234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.053 qpair failed and we were unable to recover it. 00:30:40.053 [2024-11-20 15:40:28.864459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.053 [2024-11-20 15:40:28.864491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.053 qpair failed and we were unable to recover it. 00:30:40.053 [2024-11-20 15:40:28.864932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.053 [2024-11-20 15:40:28.864962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.053 qpair failed and we were unable to recover it. 00:30:40.053 [2024-11-20 15:40:28.865314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.053 [2024-11-20 15:40:28.865345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.053 qpair failed and we were unable to recover it. 00:30:40.053 [2024-11-20 15:40:28.865714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.053 [2024-11-20 15:40:28.865744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.053 qpair failed and we were unable to recover it. 00:30:40.053 [2024-11-20 15:40:28.865979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.054 [2024-11-20 15:40:28.866007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.054 qpair failed and we were unable to recover it. 00:30:40.054 [2024-11-20 15:40:28.866400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.054 [2024-11-20 15:40:28.866432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.054 qpair failed and we were unable to recover it. 00:30:40.054 [2024-11-20 15:40:28.866803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.054 [2024-11-20 15:40:28.866835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.054 qpair failed and we were unable to recover it. 
00:30:40.054 [2024-11-20 15:40:28.867195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.054 [2024-11-20 15:40:28.867226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.054 qpair failed and we were unable to recover it. 00:30:40.054 [2024-11-20 15:40:28.867365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.054 [2024-11-20 15:40:28.867402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.054 qpair failed and we were unable to recover it. 00:30:40.054 [2024-11-20 15:40:28.867733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.054 [2024-11-20 15:40:28.867762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.054 qpair failed and we were unable to recover it. 00:30:40.054 [2024-11-20 15:40:28.868007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.054 [2024-11-20 15:40:28.868036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.054 qpair failed and we were unable to recover it. 00:30:40.054 [2024-11-20 15:40:28.868418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.054 [2024-11-20 15:40:28.868450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.054 qpair failed and we were unable to recover it. 00:30:40.054 [2024-11-20 15:40:28.868832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.054 [2024-11-20 15:40:28.868862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.054 qpair failed and we were unable to recover it. 00:30:40.054 [2024-11-20 15:40:28.869214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.054 [2024-11-20 15:40:28.869245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.054 qpair failed and we were unable to recover it. 00:30:40.054 [2024-11-20 15:40:28.869479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.054 [2024-11-20 15:40:28.869508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.054 qpair failed and we were unable to recover it. 00:30:40.054 [2024-11-20 15:40:28.869858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.054 [2024-11-20 15:40:28.869888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.054 qpair failed and we were unable to recover it. 00:30:40.054 [2024-11-20 15:40:28.870150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.054 [2024-11-20 15:40:28.870188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.054 qpair failed and we were unable to recover it. 
00:30:40.054 [2024-11-20 15:40:28.870568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.054 [2024-11-20 15:40:28.870597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.054 qpair failed and we were unable to recover it. 00:30:40.054 [2024-11-20 15:40:28.871018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.054 [2024-11-20 15:40:28.871049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.054 qpair failed and we were unable to recover it. 00:30:40.054 [2024-11-20 15:40:28.871321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.054 [2024-11-20 15:40:28.871352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.054 qpair failed and we were unable to recover it. 00:30:40.054 [2024-11-20 15:40:28.871738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.054 [2024-11-20 15:40:28.871766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.054 qpair failed and we were unable to recover it. 00:30:40.054 [2024-11-20 15:40:28.871991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.054 [2024-11-20 15:40:28.872020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.054 qpair failed and we were unable to recover it. 00:30:40.054 [2024-11-20 15:40:28.872427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.054 [2024-11-20 15:40:28.872458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.054 qpair failed and we were unable to recover it. 00:30:40.054 [2024-11-20 15:40:28.872702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.054 [2024-11-20 15:40:28.872733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.054 qpair failed and we were unable to recover it. 00:30:40.054 [2024-11-20 15:40:28.873095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.054 [2024-11-20 15:40:28.873129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.054 qpair failed and we were unable to recover it. 00:30:40.054 [2024-11-20 15:40:28.873427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.054 [2024-11-20 15:40:28.873457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.054 qpair failed and we were unable to recover it. 00:30:40.054 [2024-11-20 15:40:28.873834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.054 [2024-11-20 15:40:28.873866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.054 qpair failed and we were unable to recover it. 
00:30:40.054 [2024-11-20 15:40:28.874220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.054 [2024-11-20 15:40:28.874252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.054 qpair failed and we were unable to recover it. 00:30:40.054 [2024-11-20 15:40:28.874617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.054 [2024-11-20 15:40:28.874648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.054 qpair failed and we were unable to recover it. 00:30:40.054 [2024-11-20 15:40:28.875015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.054 [2024-11-20 15:40:28.875044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.054 qpair failed and we were unable to recover it. 00:30:40.054 [2024-11-20 15:40:28.875409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.054 [2024-11-20 15:40:28.875438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.054 qpair failed and we were unable to recover it. 00:30:40.054 [2024-11-20 15:40:28.875850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.054 [2024-11-20 15:40:28.875887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.054 qpair failed and we were unable to recover it. 00:30:40.054 [2024-11-20 15:40:28.876240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.054 [2024-11-20 15:40:28.876272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.054 qpair failed and we were unable to recover it. 00:30:40.054 [2024-11-20 15:40:28.876648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.054 [2024-11-20 15:40:28.876676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.054 qpair failed and we were unable to recover it. 00:30:40.054 [2024-11-20 15:40:28.877035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.054 [2024-11-20 15:40:28.877064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.054 qpair failed and we were unable to recover it. 00:30:40.054 [2024-11-20 15:40:28.877284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.054 [2024-11-20 15:40:28.877314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.054 qpair failed and we were unable to recover it. 00:30:40.054 [2024-11-20 15:40:28.877702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.054 [2024-11-20 15:40:28.877731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.054 qpair failed and we were unable to recover it. 
00:30:40.054 [2024-11-20 15:40:28.878077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.054 [2024-11-20 15:40:28.878108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:40.054 qpair failed and we were unable to recover it.
[... the connect()/qpair-failure triplet above repeats ~170 more times for tqpair=0x7f8e88000b90 between 15:40:28.878 and 15:40:28.938, identical except for timestamps ...]
00:30:40.059 [2024-11-20 15:40:28.939254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.059 [2024-11-20 15:40:28.939380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420
00:30:40.059 qpair failed and we were unable to recover it.
[... the same triplet repeats ~40 more times for tqpair=0xc650c0 between 15:40:28.939 and 15:40:28.954 ...]
00:30:40.060 [2024-11-20 15:40:28.954365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.060 [2024-11-20 15:40:28.954394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.060 qpair failed and we were unable to recover it. 00:30:40.060 [2024-11-20 15:40:28.954612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.060 [2024-11-20 15:40:28.954641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.060 qpair failed and we were unable to recover it. 00:30:40.060 [2024-11-20 15:40:28.954980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.060 [2024-11-20 15:40:28.955008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.060 qpair failed and we were unable to recover it. 00:30:40.060 [2024-11-20 15:40:28.955343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.060 [2024-11-20 15:40:28.955373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.060 qpair failed and we were unable to recover it. 00:30:40.060 [2024-11-20 15:40:28.955762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.060 [2024-11-20 15:40:28.955793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.060 qpair failed and we were unable to recover it. 00:30:40.060 [2024-11-20 15:40:28.956242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.060 [2024-11-20 15:40:28.956273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.060 qpair failed and we were unable to recover it. 00:30:40.060 [2024-11-20 15:40:28.956574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.060 [2024-11-20 15:40:28.956602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.060 qpair failed and we were unable to recover it. 00:30:40.060 [2024-11-20 15:40:28.956937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.060 [2024-11-20 15:40:28.956966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.061 qpair failed and we were unable to recover it. 00:30:40.061 [2024-11-20 15:40:28.957328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.061 [2024-11-20 15:40:28.957361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.061 qpair failed and we were unable to recover it. 00:30:40.061 [2024-11-20 15:40:28.957731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.061 [2024-11-20 15:40:28.957760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.061 qpair failed and we were unable to recover it. 
00:30:40.061 [2024-11-20 15:40:28.958118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.061 [2024-11-20 15:40:28.958148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.061 qpair failed and we were unable to recover it. 00:30:40.061 [2024-11-20 15:40:28.958521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.061 [2024-11-20 15:40:28.958551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.061 qpair failed and we were unable to recover it. 00:30:40.061 [2024-11-20 15:40:28.958917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.061 [2024-11-20 15:40:28.958947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.061 qpair failed and we were unable to recover it. 00:30:40.061 [2024-11-20 15:40:28.959180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.061 [2024-11-20 15:40:28.959211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.061 qpair failed and we were unable to recover it. 00:30:40.061 [2024-11-20 15:40:28.959448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.061 [2024-11-20 15:40:28.959482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.061 qpair failed and we were unable to recover it. 00:30:40.061 [2024-11-20 15:40:28.959725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.061 [2024-11-20 15:40:28.959753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.061 qpair failed and we were unable to recover it. 00:30:40.061 [2024-11-20 15:40:28.960132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.061 [2024-11-20 15:40:28.960171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.061 qpair failed and we were unable to recover it. 00:30:40.061 [2024-11-20 15:40:28.960415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.061 [2024-11-20 15:40:28.960445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.061 qpair failed and we were unable to recover it. 00:30:40.061 [2024-11-20 15:40:28.960656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.061 [2024-11-20 15:40:28.960685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.061 qpair failed and we were unable to recover it. 00:30:40.061 [2024-11-20 15:40:28.960926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.061 [2024-11-20 15:40:28.960955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.061 qpair failed and we were unable to recover it. 
00:30:40.061 [2024-11-20 15:40:28.961186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.061 [2024-11-20 15:40:28.961227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.061 qpair failed and we were unable to recover it. 00:30:40.061 [2024-11-20 15:40:28.961581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.061 [2024-11-20 15:40:28.961610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.061 qpair failed and we were unable to recover it. 00:30:40.061 [2024-11-20 15:40:28.961855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.061 [2024-11-20 15:40:28.961883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.061 qpair failed and we were unable to recover it. 00:30:40.061 [2024-11-20 15:40:28.962236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.061 [2024-11-20 15:40:28.962267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.061 qpair failed and we were unable to recover it. 00:30:40.061 [2024-11-20 15:40:28.962476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.061 [2024-11-20 15:40:28.962507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.061 qpair failed and we were unable to recover it. 00:30:40.061 [2024-11-20 15:40:28.962896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.061 [2024-11-20 15:40:28.962925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.061 qpair failed and we were unable to recover it. 00:30:40.061 [2024-11-20 15:40:28.963179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.061 [2024-11-20 15:40:28.963212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.061 qpair failed and we were unable to recover it. 00:30:40.061 [2024-11-20 15:40:28.963434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.061 [2024-11-20 15:40:28.963463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.061 qpair failed and we were unable to recover it. 00:30:40.061 [2024-11-20 15:40:28.963848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.061 [2024-11-20 15:40:28.963877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.061 qpair failed and we were unable to recover it. 00:30:40.061 [2024-11-20 15:40:28.964243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.061 [2024-11-20 15:40:28.964273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.061 qpair failed and we were unable to recover it. 
00:30:40.061 [2024-11-20 15:40:28.964612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.061 [2024-11-20 15:40:28.964641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.061 qpair failed and we were unable to recover it. 00:30:40.061 [2024-11-20 15:40:28.965014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.061 [2024-11-20 15:40:28.965042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.061 qpair failed and we were unable to recover it. 00:30:40.061 [2024-11-20 15:40:28.965395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.061 [2024-11-20 15:40:28.965427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.061 qpair failed and we were unable to recover it. 00:30:40.061 [2024-11-20 15:40:28.965808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.061 [2024-11-20 15:40:28.965839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.061 qpair failed and we were unable to recover it. 00:30:40.061 [2024-11-20 15:40:28.966056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.061 [2024-11-20 15:40:28.966085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.061 qpair failed and we were unable to recover it. 00:30:40.061 [2024-11-20 15:40:28.966437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.061 [2024-11-20 15:40:28.966468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.061 qpair failed and we were unable to recover it. 00:30:40.061 [2024-11-20 15:40:28.966685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.061 [2024-11-20 15:40:28.966716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.061 qpair failed and we were unable to recover it. 00:30:40.061 [2024-11-20 15:40:28.967100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.061 [2024-11-20 15:40:28.967130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.061 qpair failed and we were unable to recover it. 00:30:40.061 [2024-11-20 15:40:28.967513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.061 [2024-11-20 15:40:28.967543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.061 qpair failed and we were unable to recover it. 00:30:40.061 [2024-11-20 15:40:28.967918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.061 [2024-11-20 15:40:28.967953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.061 qpair failed and we were unable to recover it. 
00:30:40.061 [2024-11-20 15:40:28.968098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.061 [2024-11-20 15:40:28.968128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.061 qpair failed and we were unable to recover it. 00:30:40.061 [2024-11-20 15:40:28.968260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.061 [2024-11-20 15:40:28.968289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.061 qpair failed and we were unable to recover it. 00:30:40.061 [2024-11-20 15:40:28.968632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.061 [2024-11-20 15:40:28.968661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.061 qpair failed and we were unable to recover it. 00:30:40.061 [2024-11-20 15:40:28.969023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.061 [2024-11-20 15:40:28.969054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.061 qpair failed and we were unable to recover it. 00:30:40.061 [2024-11-20 15:40:28.969419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.061 [2024-11-20 15:40:28.969452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.061 qpair failed and we were unable to recover it. 00:30:40.061 [2024-11-20 15:40:28.969664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.061 [2024-11-20 15:40:28.969693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.061 qpair failed and we were unable to recover it. 00:30:40.061 [2024-11-20 15:40:28.970036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.062 [2024-11-20 15:40:28.970065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.062 qpair failed and we were unable to recover it. 00:30:40.062 [2024-11-20 15:40:28.970416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.062 [2024-11-20 15:40:28.970447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.062 qpair failed and we were unable to recover it. 00:30:40.062 [2024-11-20 15:40:28.970669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.062 [2024-11-20 15:40:28.970698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.062 qpair failed and we were unable to recover it. 00:30:40.062 [2024-11-20 15:40:28.971082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.062 [2024-11-20 15:40:28.971111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.062 qpair failed and we were unable to recover it. 
00:30:40.062 [2024-11-20 15:40:28.971567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.062 [2024-11-20 15:40:28.971598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.062 qpair failed and we were unable to recover it. 00:30:40.062 [2024-11-20 15:40:28.971930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.062 [2024-11-20 15:40:28.971959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.062 qpair failed and we were unable to recover it. 00:30:40.062 [2024-11-20 15:40:28.972319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.062 [2024-11-20 15:40:28.972348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.062 qpair failed and we were unable to recover it. 00:30:40.062 [2024-11-20 15:40:28.972716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.062 [2024-11-20 15:40:28.972745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.062 qpair failed and we were unable to recover it. 00:30:40.062 [2024-11-20 15:40:28.973100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.062 [2024-11-20 15:40:28.973128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.062 qpair failed and we were unable to recover it. 00:30:40.062 [2024-11-20 15:40:28.973541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.062 [2024-11-20 15:40:28.973573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.062 qpair failed and we were unable to recover it. 00:30:40.062 [2024-11-20 15:40:28.973931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.062 [2024-11-20 15:40:28.973962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.062 qpair failed and we were unable to recover it. 00:30:40.062 [2024-11-20 15:40:28.974321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.062 [2024-11-20 15:40:28.974352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.062 qpair failed and we were unable to recover it. 00:30:40.062 [2024-11-20 15:40:28.974723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.062 [2024-11-20 15:40:28.974751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.062 qpair failed and we were unable to recover it. 00:30:40.062 [2024-11-20 15:40:28.974972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.062 [2024-11-20 15:40:28.975000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.062 qpair failed and we were unable to recover it. 
00:30:40.062 [2024-11-20 15:40:28.975212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.062 [2024-11-20 15:40:28.975243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.062 qpair failed and we were unable to recover it. 00:30:40.062 [2024-11-20 15:40:28.975606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.062 [2024-11-20 15:40:28.975634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.062 qpair failed and we were unable to recover it. 00:30:40.062 [2024-11-20 15:40:28.976028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.062 [2024-11-20 15:40:28.976056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.062 qpair failed and we were unable to recover it. 00:30:40.062 [2024-11-20 15:40:28.976402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.062 [2024-11-20 15:40:28.976433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.062 qpair failed and we were unable to recover it. 00:30:40.062 [2024-11-20 15:40:28.976689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.062 [2024-11-20 15:40:28.976720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.062 qpair failed and we were unable to recover it. 00:30:40.062 [2024-11-20 15:40:28.977089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.062 [2024-11-20 15:40:28.977117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.062 qpair failed and we were unable to recover it. 00:30:40.062 [2024-11-20 15:40:28.977544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.062 [2024-11-20 15:40:28.977583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.062 qpair failed and we were unable to recover it. 00:30:40.062 [2024-11-20 15:40:28.977969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.062 [2024-11-20 15:40:28.977998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.062 qpair failed and we were unable to recover it. 00:30:40.062 [2024-11-20 15:40:28.978097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.062 [2024-11-20 15:40:28.978124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.062 qpair failed and we were unable to recover it. 
00:30:40.062 Read completed with error (sct=0, sc=8)
00:30:40.062 starting I/O failed
00:30:40.062 Read completed with error (sct=0, sc=8)
00:30:40.062 starting I/O failed
00:30:40.062 Read completed with error (sct=0, sc=8)
00:30:40.062 starting I/O failed
00:30:40.062 Read completed with error (sct=0, sc=8)
00:30:40.062 starting I/O failed
00:30:40.062 Read completed with error (sct=0, sc=8)
00:30:40.062 starting I/O failed
00:30:40.062 Read completed with error (sct=0, sc=8)
00:30:40.062 starting I/O failed
00:30:40.062 Read completed with error (sct=0, sc=8)
00:30:40.062 starting I/O failed
00:30:40.062 Read completed with error (sct=0, sc=8)
00:30:40.062 starting I/O failed
00:30:40.062 Read completed with error (sct=0, sc=8)
00:30:40.062 starting I/O failed
00:30:40.062 Read completed with error (sct=0, sc=8)
00:30:40.062 starting I/O failed
00:30:40.062 Read completed with error (sct=0, sc=8)
00:30:40.062 starting I/O failed
00:30:40.062 Read completed with error (sct=0, sc=8)
00:30:40.062 starting I/O failed
00:30:40.062 Read completed with error (sct=0, sc=8)
00:30:40.062 starting I/O failed
00:30:40.062 Write completed with error (sct=0, sc=8)
00:30:40.062 starting I/O failed
00:30:40.062 Read completed with error (sct=0, sc=8)
00:30:40.062 starting I/O failed
00:30:40.062 Write completed with error (sct=0, sc=8)
00:30:40.062 starting I/O failed
00:30:40.062 Read completed with error (sct=0, sc=8)
00:30:40.062 starting I/O failed
00:30:40.062 Read completed with error (sct=0, sc=8)
00:30:40.062 starting I/O failed
00:30:40.062 Write completed with error (sct=0, sc=8)
00:30:40.062 starting I/O failed
00:30:40.062 Read completed with error (sct=0, sc=8)
00:30:40.062 starting I/O failed
00:30:40.062 Write completed with error (sct=0, sc=8)
00:30:40.062 starting I/O failed
00:30:40.062 Read completed with error (sct=0, sc=8)
00:30:40.062 starting I/O failed
00:30:40.062 Write completed with error (sct=0, sc=8)
00:30:40.062 starting I/O failed
00:30:40.062 Write completed with error (sct=0, sc=8)
00:30:40.062 starting I/O failed
00:30:40.062 Write completed with error (sct=0, sc=8)
00:30:40.062 starting I/O failed
00:30:40.062 Write completed with error (sct=0, sc=8)
00:30:40.062 starting I/O failed
00:30:40.062 Write completed with error (sct=0, sc=8)
00:30:40.062 starting I/O failed
00:30:40.062 Write completed with error (sct=0, sc=8)
00:30:40.062 starting I/O failed
00:30:40.062 Write completed with error (sct=0, sc=8)
00:30:40.062 starting I/O failed
00:30:40.062 Write completed with error (sct=0, sc=8)
00:30:40.062 starting I/O failed
00:30:40.062 Read completed with error (sct=0, sc=8)
00:30:40.062 starting I/O failed
00:30:40.062 Read completed with error (sct=0, sc=8)
00:30:40.062 starting I/O failed
00:30:40.062 [2024-11-20 15:40:28.978991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.062 [2024-11-20 15:40:28.979277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5ae00 is same with the state(6) to be set
00:30:40.062 [2024-11-20 15:40:28.980018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.062 [2024-11-20 15:40:28.980127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:40.062 qpair failed and we were unable to recover it.
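The 32 completions above are the host draining its queue once the transport drops: every outstanding command is failed back at once. In an NVMe completion, sct is the Status Code Type and sc the Status Code; sct=0 selects the Generic Command Status set, in which code 8 is "Command Aborted due to SQ Deletion" per the NVMe base specification, consistent with a queue pair being destroyed under live I/O. The CQ transport error -6 is a negated Linux errno (ENXIO, "No such device or address"), here reported by spdk_nvme_qpair_process_completions once the connection is gone. A small decoder for the (sct, sc) pair, assuming only the spec's completion-entry layout (SC in bits 7:0 of the status field, SCT in bits 10:8):

#include <stdio.h>
#include <stdint.h>

/* Field layout per the NVMe base spec completion queue entry:
 * within the 15-bit status field, SC occupies bits 7:0 and SCT bits 10:8. */
static void decode_status(uint16_t status_field)
{
    uint8_t sct = (status_field >> 8) & 0x7; /* Status Code Type */
    uint8_t sc = status_field & 0xff;        /* Status Code */
    printf("sct=%u, sc=%u\n", sct, sc);
}

int main(void)
{
    /* sct=0 (Generic Command Status), sc=8: "Command Aborted due to
     * SQ Deletion", the status a host reports when the queue pair is
     * torn down underneath outstanding I/O, as in the burst above. */
    decode_status((0u << 8) | 8u);
    return 0;
}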
00:30:40.062 [2024-11-20 15:40:28.980563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.062 [2024-11-20 15:40:28.980670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.062 qpair failed and we were unable to recover it. 00:30:40.062 [2024-11-20 15:40:28.981123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.062 [2024-11-20 15:40:28.981182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.062 qpair failed and we were unable to recover it. 00:30:40.062 [2024-11-20 15:40:28.981565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.062 [2024-11-20 15:40:28.981609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.062 qpair failed and we were unable to recover it. 00:30:40.063 [2024-11-20 15:40:28.981832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.063 [2024-11-20 15:40:28.981861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.063 qpair failed and we were unable to recover it. 00:30:40.063 [2024-11-20 15:40:28.982417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.063 [2024-11-20 15:40:28.982524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.063 qpair failed and we were unable to recover it. 00:30:40.063 [2024-11-20 15:40:28.982834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.063 [2024-11-20 15:40:28.982871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.063 qpair failed and we were unable to recover it. 00:30:40.063 [2024-11-20 15:40:28.983188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.063 [2024-11-20 15:40:28.983220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.063 qpair failed and we were unable to recover it. 00:30:40.063 [2024-11-20 15:40:28.983496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.063 [2024-11-20 15:40:28.983527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.063 qpair failed and we were unable to recover it. 00:30:40.063 [2024-11-20 15:40:28.983922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.063 [2024-11-20 15:40:28.983953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.063 qpair failed and we were unable to recover it. 00:30:40.367 [2024-11-20 15:40:28.984306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.367 [2024-11-20 15:40:28.984339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.367 qpair failed and we were unable to recover it. 
00:30:40.367 [2024-11-20 15:40:28.984714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.367 [2024-11-20 15:40:28.984743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.367 qpair failed and we were unable to recover it. 00:30:40.367 [2024-11-20 15:40:28.985130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.367 [2024-11-20 15:40:28.985168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.367 qpair failed and we were unable to recover it. 00:30:40.367 [2024-11-20 15:40:28.985500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.367 [2024-11-20 15:40:28.985528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.367 qpair failed and we were unable to recover it. 00:30:40.367 [2024-11-20 15:40:28.985929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.367 [2024-11-20 15:40:28.985959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.368 qpair failed and we were unable to recover it. 00:30:40.368 [2024-11-20 15:40:28.986186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.368 [2024-11-20 15:40:28.986217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.368 qpair failed and we were unable to recover it. 00:30:40.368 [2024-11-20 15:40:28.986610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.368 [2024-11-20 15:40:28.986639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.368 qpair failed and we were unable to recover it. 00:30:40.368 [2024-11-20 15:40:28.986885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.368 [2024-11-20 15:40:28.986915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.368 qpair failed and we were unable to recover it. 00:30:40.368 [2024-11-20 15:40:28.987280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.368 [2024-11-20 15:40:28.987311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.368 qpair failed and we were unable to recover it. 00:30:40.368 [2024-11-20 15:40:28.987654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.368 [2024-11-20 15:40:28.987686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.368 qpair failed and we were unable to recover it. 00:30:40.368 [2024-11-20 15:40:28.988052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.368 [2024-11-20 15:40:28.988082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.368 qpair failed and we were unable to recover it. 
00:30:40.368 [2024-11-20 15:40:28.988407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.368 [2024-11-20 15:40:28.988446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.368 qpair failed and we were unable to recover it. 00:30:40.368 [2024-11-20 15:40:28.988662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.369 [2024-11-20 15:40:28.988691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.369 qpair failed and we were unable to recover it. 00:30:40.369 [2024-11-20 15:40:28.988912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.369 [2024-11-20 15:40:28.988941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.369 qpair failed and we were unable to recover it. 00:30:40.369 [2024-11-20 15:40:28.989297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.369 [2024-11-20 15:40:28.989327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.369 qpair failed and we were unable to recover it. 00:30:40.369 [2024-11-20 15:40:28.989698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.369 [2024-11-20 15:40:28.989727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.369 qpair failed and we were unable to recover it. 00:30:40.369 [2024-11-20 15:40:28.990100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.369 [2024-11-20 15:40:28.990131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.369 qpair failed and we were unable to recover it. 00:30:40.369 [2024-11-20 15:40:28.990519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.369 [2024-11-20 15:40:28.990549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.369 qpair failed and we were unable to recover it. 00:30:40.369 [2024-11-20 15:40:28.990918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.369 [2024-11-20 15:40:28.990948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.369 qpair failed and we were unable to recover it. 00:30:40.369 [2024-11-20 15:40:28.991309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.369 [2024-11-20 15:40:28.991340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.369 qpair failed and we were unable to recover it. 00:30:40.370 [2024-11-20 15:40:28.991728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.370 [2024-11-20 15:40:28.991757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.370 qpair failed and we were unable to recover it. 
00:30:40.370 [2024-11-20 15:40:28.992109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.370 [2024-11-20 15:40:28.992138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.370 qpair failed and we were unable to recover it. 00:30:40.370 [2024-11-20 15:40:28.992550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.370 [2024-11-20 15:40:28.992582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.370 qpair failed and we were unable to recover it. 00:30:40.370 [2024-11-20 15:40:28.992923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.370 [2024-11-20 15:40:28.992953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.370 qpair failed and we were unable to recover it. 00:30:40.370 [2024-11-20 15:40:28.993317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.370 [2024-11-20 15:40:28.993347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.370 qpair failed and we were unable to recover it. 00:30:40.370 [2024-11-20 15:40:28.993741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.370 [2024-11-20 15:40:28.993769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.370 qpair failed and we were unable to recover it. 00:30:40.370 [2024-11-20 15:40:28.994131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.370 [2024-11-20 15:40:28.994169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.370 qpair failed and we were unable to recover it. 00:30:40.370 [2024-11-20 15:40:28.994554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.370 [2024-11-20 15:40:28.994584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.370 qpair failed and we were unable to recover it. 00:30:40.370 [2024-11-20 15:40:28.994818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.370 [2024-11-20 15:40:28.994846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.370 qpair failed and we were unable to recover it. 00:30:40.370 [2024-11-20 15:40:28.995102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.370 [2024-11-20 15:40:28.995133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.370 qpair failed and we were unable to recover it. 00:30:40.370 [2024-11-20 15:40:28.995506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.370 [2024-11-20 15:40:28.995537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.370 qpair failed and we were unable to recover it. 
00:30:40.370 [2024-11-20 15:40:28.995894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.370 [2024-11-20 15:40:28.995923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.370 qpair failed and we were unable to recover it. 00:30:40.370 [2024-11-20 15:40:28.996314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.370 [2024-11-20 15:40:28.996344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.370 qpair failed and we were unable to recover it. 00:30:40.370 [2024-11-20 15:40:28.996701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.370 [2024-11-20 15:40:28.996745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.370 qpair failed and we were unable to recover it. 00:30:40.370 [2024-11-20 15:40:28.997106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.370 [2024-11-20 15:40:28.997135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.370 qpair failed and we were unable to recover it. 00:30:40.370 [2024-11-20 15:40:28.997422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.370 [2024-11-20 15:40:28.997457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.370 qpair failed and we were unable to recover it. 00:30:40.370 [2024-11-20 15:40:28.997799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.370 [2024-11-20 15:40:28.997828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.370 qpair failed and we were unable to recover it. 00:30:40.370 [2024-11-20 15:40:28.998201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.370 [2024-11-20 15:40:28.998231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.370 qpair failed and we were unable to recover it. 00:30:40.370 [2024-11-20 15:40:28.998482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.370 [2024-11-20 15:40:28.998510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.370 qpair failed and we were unable to recover it. 00:30:40.370 [2024-11-20 15:40:28.998849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.370 [2024-11-20 15:40:28.998877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.370 qpair failed and we were unable to recover it. 00:30:40.370 [2024-11-20 15:40:28.999212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.370 [2024-11-20 15:40:28.999242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.370 qpair failed and we were unable to recover it. 
00:30:40.370 [2024-11-20 15:40:28.999603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.370 [2024-11-20 15:40:28.999631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:40.370 qpair failed and we were unable to recover it.
00:30:40.370 [... the same three-line error repeats for roughly 200 further connection attempts between 15:40:29.000001 and 15:40:29.075210, first for tqpair=0x7f8e88000b90 and then for tqpair=0xc650c0, always with addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:30:40.375 [2024-11-20 15:40:29.075210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.375 [2024-11-20 15:40:29.075246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420
00:30:40.375 qpair failed and we were unable to recover it.
00:30:40.375 [2024-11-20 15:40:29.075610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.375 [2024-11-20 15:40:29.075638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.375 qpair failed and we were unable to recover it. 00:30:40.375 [2024-11-20 15:40:29.076016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.375 [2024-11-20 15:40:29.076045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.375 qpair failed and we were unable to recover it. 00:30:40.375 [2024-11-20 15:40:29.076411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.375 [2024-11-20 15:40:29.076441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.375 qpair failed and we were unable to recover it. 00:30:40.375 [2024-11-20 15:40:29.076792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.375 [2024-11-20 15:40:29.076821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.375 qpair failed and we were unable to recover it. 00:30:40.375 [2024-11-20 15:40:29.077180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.375 [2024-11-20 15:40:29.077209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.375 qpair failed and we were unable to recover it. 00:30:40.375 [2024-11-20 15:40:29.077572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.375 [2024-11-20 15:40:29.077599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.375 qpair failed and we were unable to recover it. 00:30:40.375 [2024-11-20 15:40:29.077968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.375 [2024-11-20 15:40:29.077996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.375 qpair failed and we were unable to recover it. 00:30:40.375 [2024-11-20 15:40:29.078346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.375 [2024-11-20 15:40:29.078375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.375 qpair failed and we were unable to recover it. 00:30:40.375 [2024-11-20 15:40:29.078608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.375 [2024-11-20 15:40:29.078636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.375 qpair failed and we were unable to recover it. 00:30:40.375 [2024-11-20 15:40:29.079044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.375 [2024-11-20 15:40:29.079072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.375 qpair failed and we were unable to recover it. 
00:30:40.375 [2024-11-20 15:40:29.079517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.375 [2024-11-20 15:40:29.079547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.375 qpair failed and we were unable to recover it. 00:30:40.375 [2024-11-20 15:40:29.079850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.375 [2024-11-20 15:40:29.079879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.375 qpair failed and we were unable to recover it. 00:30:40.375 [2024-11-20 15:40:29.080216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.376 [2024-11-20 15:40:29.080247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.376 qpair failed and we were unable to recover it. 00:30:40.376 [2024-11-20 15:40:29.080609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.376 [2024-11-20 15:40:29.080638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.376 qpair failed and we were unable to recover it. 00:30:40.376 [2024-11-20 15:40:29.081013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.376 [2024-11-20 15:40:29.081040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.376 qpair failed and we were unable to recover it. 00:30:40.376 [2024-11-20 15:40:29.081443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.376 [2024-11-20 15:40:29.081475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.376 qpair failed and we were unable to recover it. 00:30:40.376 [2024-11-20 15:40:29.081837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.376 [2024-11-20 15:40:29.081865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.376 qpair failed and we were unable to recover it. 00:30:40.376 [2024-11-20 15:40:29.082093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.376 [2024-11-20 15:40:29.082121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.376 qpair failed and we were unable to recover it. 00:30:40.376 [2024-11-20 15:40:29.082517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.376 [2024-11-20 15:40:29.082548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.376 qpair failed and we were unable to recover it. 00:30:40.376 [2024-11-20 15:40:29.082893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.376 [2024-11-20 15:40:29.082923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.376 qpair failed and we were unable to recover it. 
00:30:40.376 [2024-11-20 15:40:29.083297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.376 [2024-11-20 15:40:29.083327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.376 qpair failed and we were unable to recover it. 00:30:40.376 [2024-11-20 15:40:29.083686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.376 [2024-11-20 15:40:29.083715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.376 qpair failed and we were unable to recover it. 00:30:40.376 [2024-11-20 15:40:29.084077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.376 [2024-11-20 15:40:29.084106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.376 qpair failed and we were unable to recover it. 00:30:40.376 [2024-11-20 15:40:29.084334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.376 [2024-11-20 15:40:29.084365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.376 qpair failed and we were unable to recover it. 00:30:40.376 [2024-11-20 15:40:29.084613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.376 [2024-11-20 15:40:29.084646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.376 qpair failed and we were unable to recover it. 00:30:40.376 [2024-11-20 15:40:29.085036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.376 [2024-11-20 15:40:29.085066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.376 qpair failed and we were unable to recover it. 00:30:40.376 [2024-11-20 15:40:29.085311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.376 [2024-11-20 15:40:29.085340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.376 qpair failed and we were unable to recover it. 00:30:40.376 [2024-11-20 15:40:29.085583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.376 [2024-11-20 15:40:29.085612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.376 qpair failed and we were unable to recover it. 00:30:40.376 [2024-11-20 15:40:29.085973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.376 [2024-11-20 15:40:29.086003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.376 qpair failed and we were unable to recover it. 00:30:40.376 [2024-11-20 15:40:29.086341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.376 [2024-11-20 15:40:29.086370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.377 qpair failed and we were unable to recover it. 
00:30:40.377 [2024-11-20 15:40:29.086719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.377 [2024-11-20 15:40:29.086747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.377 qpair failed and we were unable to recover it. 00:30:40.377 [2024-11-20 15:40:29.087110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.377 [2024-11-20 15:40:29.087138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.377 qpair failed and we were unable to recover it. 00:30:40.377 [2024-11-20 15:40:29.087510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.377 [2024-11-20 15:40:29.087539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.377 qpair failed and we were unable to recover it. 00:30:40.377 [2024-11-20 15:40:29.087908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.377 [2024-11-20 15:40:29.087936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.377 qpair failed and we were unable to recover it. 00:30:40.377 [2024-11-20 15:40:29.088209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.377 [2024-11-20 15:40:29.088241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.377 qpair failed and we were unable to recover it. 00:30:40.377 [2024-11-20 15:40:29.088354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.377 [2024-11-20 15:40:29.088386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc650c0 with addr=10.0.0.2, port=4420 00:30:40.377 qpair failed and we were unable to recover it. 00:30:40.377 [2024-11-20 15:40:29.088906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.377 [2024-11-20 15:40:29.089015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.377 qpair failed and we were unable to recover it. 00:30:40.377 [2024-11-20 15:40:29.089415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.377 [2024-11-20 15:40:29.089524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.378 qpair failed and we were unable to recover it. 00:30:40.378 [2024-11-20 15:40:29.089957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.378 [2024-11-20 15:40:29.089994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.378 qpair failed and we were unable to recover it. 00:30:40.378 [2024-11-20 15:40:29.090225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.378 [2024-11-20 15:40:29.090280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.378 qpair failed and we were unable to recover it. 
00:30:40.378 [2024-11-20 15:40:29.090535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.378 [2024-11-20 15:40:29.090580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.378 qpair failed and we were unable to recover it. 00:30:40.378 [2024-11-20 15:40:29.090954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.378 [2024-11-20 15:40:29.090984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.378 qpair failed and we were unable to recover it. 00:30:40.378 [2024-11-20 15:40:29.091190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.378 [2024-11-20 15:40:29.091220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.378 qpair failed and we were unable to recover it. 00:30:40.378 [2024-11-20 15:40:29.091488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.378 [2024-11-20 15:40:29.091516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.378 qpair failed and we were unable to recover it. 00:30:40.378 [2024-11-20 15:40:29.091865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.378 [2024-11-20 15:40:29.091894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.378 qpair failed and we were unable to recover it. 00:30:40.378 [2024-11-20 15:40:29.092251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.378 [2024-11-20 15:40:29.092282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.378 qpair failed and we were unable to recover it. 00:30:40.378 [2024-11-20 15:40:29.092646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.378 [2024-11-20 15:40:29.092676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.378 qpair failed and we were unable to recover it. 00:30:40.379 [2024-11-20 15:40:29.092920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.379 [2024-11-20 15:40:29.092953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.379 qpair failed and we were unable to recover it. 00:30:40.379 [2024-11-20 15:40:29.093189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.379 [2024-11-20 15:40:29.093221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.379 qpair failed and we were unable to recover it. 00:30:40.379 [2024-11-20 15:40:29.093601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.379 [2024-11-20 15:40:29.093631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.379 qpair failed and we were unable to recover it. 
00:30:40.379 [2024-11-20 15:40:29.093949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.379 [2024-11-20 15:40:29.093978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.379 qpair failed and we were unable to recover it. 00:30:40.379 [2024-11-20 15:40:29.094338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.379 [2024-11-20 15:40:29.094372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.379 qpair failed and we were unable to recover it. 00:30:40.379 [2024-11-20 15:40:29.094756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.379 [2024-11-20 15:40:29.094784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.379 qpair failed and we were unable to recover it. 00:30:40.379 [2024-11-20 15:40:29.094997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.379 [2024-11-20 15:40:29.095025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.379 qpair failed and we were unable to recover it. 00:30:40.379 [2024-11-20 15:40:29.095410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.380 [2024-11-20 15:40:29.095441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.380 qpair failed and we were unable to recover it. 00:30:40.380 [2024-11-20 15:40:29.095786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.380 [2024-11-20 15:40:29.095816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.380 qpair failed and we were unable to recover it. 00:30:40.380 [2024-11-20 15:40:29.096185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.380 [2024-11-20 15:40:29.096215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.380 qpair failed and we were unable to recover it. 00:30:40.380 [2024-11-20 15:40:29.096558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.380 [2024-11-20 15:40:29.096586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.380 qpair failed and we were unable to recover it. 00:30:40.380 [2024-11-20 15:40:29.096842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.380 [2024-11-20 15:40:29.096874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.380 qpair failed and we were unable to recover it. 00:30:40.380 [2024-11-20 15:40:29.097093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.380 [2024-11-20 15:40:29.097123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.380 qpair failed and we were unable to recover it. 
00:30:40.380 [2024-11-20 15:40:29.097311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.381 [2024-11-20 15:40:29.097341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.381 qpair failed and we were unable to recover it. 00:30:40.381 [2024-11-20 15:40:29.097721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.381 [2024-11-20 15:40:29.097750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.381 qpair failed and we were unable to recover it. 00:30:40.381 [2024-11-20 15:40:29.097995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.381 [2024-11-20 15:40:29.098027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.381 qpair failed and we were unable to recover it. 00:30:40.381 [2024-11-20 15:40:29.098339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.381 [2024-11-20 15:40:29.098370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.381 qpair failed and we were unable to recover it. 00:30:40.381 [2024-11-20 15:40:29.098751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.381 [2024-11-20 15:40:29.098779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.381 qpair failed and we were unable to recover it. 00:30:40.381 [2024-11-20 15:40:29.099013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.381 [2024-11-20 15:40:29.099041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.381 qpair failed and we were unable to recover it. 00:30:40.381 [2024-11-20 15:40:29.099391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.381 [2024-11-20 15:40:29.099421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.381 qpair failed and we were unable to recover it. 00:30:40.381 [2024-11-20 15:40:29.099751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.381 [2024-11-20 15:40:29.099781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.381 qpair failed and we were unable to recover it. 00:30:40.381 [2024-11-20 15:40:29.100149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.382 [2024-11-20 15:40:29.100187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.382 qpair failed and we were unable to recover it. 00:30:40.382 [2024-11-20 15:40:29.100540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.382 [2024-11-20 15:40:29.100569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.382 qpair failed and we were unable to recover it. 
00:30:40.382 [2024-11-20 15:40:29.100932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.382 [2024-11-20 15:40:29.100960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.382 qpair failed and we were unable to recover it. 00:30:40.382 [2024-11-20 15:40:29.101334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.382 [2024-11-20 15:40:29.101365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.382 qpair failed and we were unable to recover it. 00:30:40.382 [2024-11-20 15:40:29.101721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.382 [2024-11-20 15:40:29.101750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.382 qpair failed and we were unable to recover it. 00:30:40.382 [2024-11-20 15:40:29.101983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.382 [2024-11-20 15:40:29.102012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.382 qpair failed and we were unable to recover it. 00:30:40.382 [2024-11-20 15:40:29.102363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.382 [2024-11-20 15:40:29.102393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.382 qpair failed and we were unable to recover it. 00:30:40.382 [2024-11-20 15:40:29.102738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.382 [2024-11-20 15:40:29.102768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.382 qpair failed and we were unable to recover it. 00:30:40.382 [2024-11-20 15:40:29.102889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.382 [2024-11-20 15:40:29.102921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.382 qpair failed and we were unable to recover it. 00:30:40.382 [2024-11-20 15:40:29.103252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.382 [2024-11-20 15:40:29.103282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.382 qpair failed and we were unable to recover it. 00:30:40.382 [2024-11-20 15:40:29.103378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.382 [2024-11-20 15:40:29.103405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.382 qpair failed and we were unable to recover it. 00:30:40.382 [2024-11-20 15:40:29.103782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.382 [2024-11-20 15:40:29.103811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.382 qpair failed and we were unable to recover it. 
00:30:40.382 [2024-11-20 15:40:29.104187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.382 [2024-11-20 15:40:29.104225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.382 qpair failed and we were unable to recover it. 00:30:40.382 [2024-11-20 15:40:29.104591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.382 [2024-11-20 15:40:29.104620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.382 qpair failed and we were unable to recover it. 00:30:40.382 [2024-11-20 15:40:29.105000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.382 [2024-11-20 15:40:29.105028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.382 qpair failed and we were unable to recover it. 00:30:40.382 [2024-11-20 15:40:29.105413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.382 [2024-11-20 15:40:29.105444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.382 qpair failed and we were unable to recover it. 00:30:40.382 [2024-11-20 15:40:29.105810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.383 [2024-11-20 15:40:29.105839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.383 qpair failed and we were unable to recover it. 00:30:40.383 [2024-11-20 15:40:29.106185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.383 [2024-11-20 15:40:29.106215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.383 qpair failed and we were unable to recover it. 00:30:40.383 [2024-11-20 15:40:29.106574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.383 [2024-11-20 15:40:29.106603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.383 qpair failed and we were unable to recover it. 00:30:40.383 [2024-11-20 15:40:29.106977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.383 [2024-11-20 15:40:29.107006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.383 qpair failed and we were unable to recover it. 00:30:40.383 [2024-11-20 15:40:29.107386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.383 [2024-11-20 15:40:29.107416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.383 qpair failed and we were unable to recover it. 00:30:40.383 [2024-11-20 15:40:29.107790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.383 [2024-11-20 15:40:29.107819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.383 qpair failed and we were unable to recover it. 
00:30:40.383 [2024-11-20 15:40:29.108190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.383 [2024-11-20 15:40:29.108220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.383 qpair failed and we were unable to recover it. 00:30:40.383 [2024-11-20 15:40:29.108465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.383 [2024-11-20 15:40:29.108493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.383 qpair failed and we were unable to recover it. 00:30:40.383 [2024-11-20 15:40:29.108829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.383 [2024-11-20 15:40:29.108858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.383 qpair failed and we were unable to recover it. 00:30:40.383 [2024-11-20 15:40:29.109080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.383 [2024-11-20 15:40:29.109112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.383 qpair failed and we were unable to recover it. 00:30:40.383 [2024-11-20 15:40:29.109534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.383 [2024-11-20 15:40:29.109565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.383 qpair failed and we were unable to recover it. 00:30:40.383 [2024-11-20 15:40:29.109810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.383 [2024-11-20 15:40:29.109838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.383 qpair failed and we were unable to recover it. 00:30:40.383 [2024-11-20 15:40:29.110061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.383 [2024-11-20 15:40:29.110090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.383 qpair failed and we were unable to recover it. 00:30:40.383 [2024-11-20 15:40:29.110472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.383 [2024-11-20 15:40:29.110504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.383 qpair failed and we were unable to recover it. 00:30:40.383 [2024-11-20 15:40:29.110858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.383 [2024-11-20 15:40:29.110887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.383 qpair failed and we were unable to recover it. 00:30:40.383 [2024-11-20 15:40:29.111130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.383 [2024-11-20 15:40:29.111171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.383 qpair failed and we were unable to recover it. 
00:30:40.383 [2024-11-20 15:40:29.111533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.383 [2024-11-20 15:40:29.111564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.383 qpair failed and we were unable to recover it. 00:30:40.384 [2024-11-20 15:40:29.111925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.384 [2024-11-20 15:40:29.111954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.384 qpair failed and we were unable to recover it. 00:30:40.384 [2024-11-20 15:40:29.112295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.384 [2024-11-20 15:40:29.112325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.384 qpair failed and we were unable to recover it. 00:30:40.384 [2024-11-20 15:40:29.112673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.384 [2024-11-20 15:40:29.112703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.384 qpair failed and we were unable to recover it. 00:30:40.384 [2024-11-20 15:40:29.113065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.384 [2024-11-20 15:40:29.113092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.384 qpair failed and we were unable to recover it. 00:30:40.384 [2024-11-20 15:40:29.113464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.384 [2024-11-20 15:40:29.113494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.384 qpair failed and we were unable to recover it. 00:30:40.384 [2024-11-20 15:40:29.113858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.384 [2024-11-20 15:40:29.113889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.384 qpair failed and we were unable to recover it. 00:30:40.384 [2024-11-20 15:40:29.114266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.384 [2024-11-20 15:40:29.114298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.384 qpair failed and we were unable to recover it. 00:30:40.384 [2024-11-20 15:40:29.114673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.384 [2024-11-20 15:40:29.114701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.384 qpair failed and we were unable to recover it. 00:30:40.384 [2024-11-20 15:40:29.114815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.384 [2024-11-20 15:40:29.114847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.384 qpair failed and we were unable to recover it. 
00:30:40.384 [2024-11-20 15:40:29.115200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.384 [2024-11-20 15:40:29.115229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.384 qpair failed and we were unable to recover it. 00:30:40.384 [2024-11-20 15:40:29.115556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.384 [2024-11-20 15:40:29.115585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.384 qpair failed and we were unable to recover it. 00:30:40.384 [2024-11-20 15:40:29.115954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.384 [2024-11-20 15:40:29.115982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.384 qpair failed and we were unable to recover it. 00:30:40.384 [2024-11-20 15:40:29.116220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.384 [2024-11-20 15:40:29.116249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.384 qpair failed and we were unable to recover it. 00:30:40.384 [2024-11-20 15:40:29.116550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.384 [2024-11-20 15:40:29.116579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.384 qpair failed and we were unable to recover it. 00:30:40.384 [2024-11-20 15:40:29.116936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.384 [2024-11-20 15:40:29.116964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.384 qpair failed and we were unable to recover it. 00:30:40.384 [2024-11-20 15:40:29.117325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.384 [2024-11-20 15:40:29.117356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.384 qpair failed and we were unable to recover it. 00:30:40.384 [2024-11-20 15:40:29.117739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.384 [2024-11-20 15:40:29.117767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.384 qpair failed and we were unable to recover it. 00:30:40.384 [2024-11-20 15:40:29.118140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.384 [2024-11-20 15:40:29.118207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.384 qpair failed and we were unable to recover it. 00:30:40.384 [2024-11-20 15:40:29.118460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.384 [2024-11-20 15:40:29.118489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.384 qpair failed and we were unable to recover it. 
00:30:40.384 [2024-11-20 15:40:29.118856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.384 [2024-11-20 15:40:29.118892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.384 qpair failed and we were unable to recover it. 00:30:40.384 [2024-11-20 15:40:29.119243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.384 [2024-11-20 15:40:29.119274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.384 qpair failed and we were unable to recover it. 00:30:40.384 [2024-11-20 15:40:29.119654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.384 [2024-11-20 15:40:29.119691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.384 qpair failed and we were unable to recover it. 00:30:40.384 [2024-11-20 15:40:29.119906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.384 [2024-11-20 15:40:29.119935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.384 qpair failed and we were unable to recover it. 00:30:40.384 [2024-11-20 15:40:29.120312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.384 [2024-11-20 15:40:29.120341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.384 qpair failed and we were unable to recover it. 00:30:40.384 [2024-11-20 15:40:29.120716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.384 [2024-11-20 15:40:29.120744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.384 qpair failed and we were unable to recover it. 00:30:40.384 [2024-11-20 15:40:29.121114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.384 [2024-11-20 15:40:29.121143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.384 qpair failed and we were unable to recover it. 00:30:40.384 [2024-11-20 15:40:29.121520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.384 [2024-11-20 15:40:29.121549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.384 qpair failed and we were unable to recover it. 00:30:40.384 [2024-11-20 15:40:29.121890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.384 [2024-11-20 15:40:29.121919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.384 qpair failed and we were unable to recover it. 00:30:40.384 [2024-11-20 15:40:29.122277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.384 [2024-11-20 15:40:29.122307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.384 qpair failed and we were unable to recover it. 
00:30:40.384 [2024-11-20 15:40:29.122668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.384 [2024-11-20 15:40:29.122698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.384 qpair failed and we were unable to recover it. 00:30:40.384 [2024-11-20 15:40:29.123061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.384 [2024-11-20 15:40:29.123091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.384 qpair failed and we were unable to recover it. 00:30:40.384 [2024-11-20 15:40:29.123366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.384 [2024-11-20 15:40:29.123397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.384 qpair failed and we were unable to recover it. 00:30:40.384 [2024-11-20 15:40:29.123769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.384 [2024-11-20 15:40:29.123797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.384 qpair failed and we were unable to recover it. 00:30:40.384 [2024-11-20 15:40:29.124174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.384 [2024-11-20 15:40:29.124204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.384 qpair failed and we were unable to recover it. 00:30:40.384 [2024-11-20 15:40:29.124408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.384 [2024-11-20 15:40:29.124436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.384 qpair failed and we were unable to recover it. 00:30:40.384 [2024-11-20 15:40:29.124786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.384 [2024-11-20 15:40:29.124814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.384 qpair failed and we were unable to recover it. 00:30:40.385 [2024-11-20 15:40:29.125150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.385 [2024-11-20 15:40:29.125187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.385 qpair failed and we were unable to recover it. 00:30:40.385 [2024-11-20 15:40:29.125544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.385 [2024-11-20 15:40:29.125573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.385 qpair failed and we were unable to recover it. 00:30:40.385 [2024-11-20 15:40:29.125937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.385 [2024-11-20 15:40:29.125966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.385 qpair failed and we were unable to recover it. 
00:30:40.389 [2024-11-20 15:40:29.194087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.194116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.194381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.194411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.194641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.194670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.194926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.194955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.195304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.195335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.195564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.195592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.195809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.195837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.196183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.196213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.196585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.196614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.196971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.197001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 
00:30:40.389 [2024-11-20 15:40:29.197429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.197459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.197902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.197931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.198209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.198241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.198588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.198618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.198978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.199006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.199412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.199443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.199635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.199663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.200064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.200093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.200337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.200367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.200583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.200613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 
00:30:40.389 [2024-11-20 15:40:29.201012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.201041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.201413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.201444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.201799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.201827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.202237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.202267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.202593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.202621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.202852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.202880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.203263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.203293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.203537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.203565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.203813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.203842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.204081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.204110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 
00:30:40.389 [2024-11-20 15:40:29.204543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.204579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.204832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.204864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.205219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.205251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.205633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.205663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.206036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.206064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.206458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.206489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.206843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.206873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.207260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.207291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.207685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.207713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.207923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.207951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 
00:30:40.389 [2024-11-20 15:40:29.208181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.208210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.208563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.208592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.208954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.208983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.209256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.209286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.209689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.209718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.210084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.210112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.210573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.210604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.210882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.210912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.211280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.211310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.211683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.211713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 
00:30:40.389 [2024-11-20 15:40:29.212083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.212110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.212462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.212492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.212609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.212641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.213019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.213048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.213409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.213439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.213836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.213866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.214226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.214256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.214516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.214545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.214823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.214852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.215204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.215235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 
00:30:40.389 [2024-11-20 15:40:29.215569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.389 [2024-11-20 15:40:29.215598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.389 qpair failed and we were unable to recover it. 00:30:40.389 [2024-11-20 15:40:29.215958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.215986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.216433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.216462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.216825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.216855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.217153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.217189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.217551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.217580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.217927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.217955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.218318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.218348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.218599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.218628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.218973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.219001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 
00:30:40.390 [2024-11-20 15:40:29.219381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.219425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.219771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.219801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.220151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.220206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.220441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.220469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.220709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.220738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.221114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.221143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.221513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.221542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.221905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.221934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.222272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.222303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.222679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.222709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 
00:30:40.390 [2024-11-20 15:40:29.223069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.223098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.223479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.223510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.223870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.223900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.224284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.224314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.224687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.224716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.224940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.224970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.225349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.225380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.225725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.225754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.226115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.226145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.226559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.226587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 
00:30:40.390 [2024-11-20 15:40:29.226959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.226987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.227339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.227370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.227723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.227753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.228112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.228141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.228514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.228543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.228954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.228982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.229325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.229354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.229612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.229641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.229889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.229920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.230296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.230326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 
00:30:40.390 [2024-11-20 15:40:29.230699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.230729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.231083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.231115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.231495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.231525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.231856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.231886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.232201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.232231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.232578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.232606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.232976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.233005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.233395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.233425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.233634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.233662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.234043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.234072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 
00:30:40.390 [2024-11-20 15:40:29.234426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.234463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.234826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.234855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.235215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.235246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.235583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.235614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.235842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.235871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.236275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.236306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.236669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.236698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.237072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.237102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.237452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.237482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.237834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.237865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 
00:30:40.390 [2024-11-20 15:40:29.238204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.238234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.238591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.238621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.238854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.238884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.239265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.239296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.239540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.239569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.390 qpair failed and we were unable to recover it. 00:30:40.390 [2024-11-20 15:40:29.239796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.390 [2024-11-20 15:40:29.239826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.391 qpair failed and we were unable to recover it. 00:30:40.391 [2024-11-20 15:40:29.240195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.391 [2024-11-20 15:40:29.240225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.391 qpair failed and we were unable to recover it. 00:30:40.391 [2024-11-20 15:40:29.240458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.391 [2024-11-20 15:40:29.240487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.391 qpair failed and we were unable to recover it. 00:30:40.391 [2024-11-20 15:40:29.240843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.391 [2024-11-20 15:40:29.240873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.391 qpair failed and we were unable to recover it. 00:30:40.391 [2024-11-20 15:40:29.241242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.391 [2024-11-20 15:40:29.241273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.391 qpair failed and we were unable to recover it. 
00:30:40.391 [2024-11-20 15:40:29.241641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.391 [2024-11-20 15:40:29.241671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.391 qpair failed and we were unable to recover it. 00:30:40.391 [2024-11-20 15:40:29.241921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.391 [2024-11-20 15:40:29.241950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.391 qpair failed and we were unable to recover it. 00:30:40.391 [2024-11-20 15:40:29.242280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.391 [2024-11-20 15:40:29.242311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.391 qpair failed and we were unable to recover it. 00:30:40.391 [2024-11-20 15:40:29.242682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.391 [2024-11-20 15:40:29.242711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.391 qpair failed and we were unable to recover it. 00:30:40.391 [2024-11-20 15:40:29.242923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.391 [2024-11-20 15:40:29.242952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.391 qpair failed and we were unable to recover it. 00:30:40.391 [2024-11-20 15:40:29.243323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.391 [2024-11-20 15:40:29.243354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.391 qpair failed and we were unable to recover it. 00:30:40.391 [2024-11-20 15:40:29.243722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.391 [2024-11-20 15:40:29.243751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.391 qpair failed and we were unable to recover it. 00:30:40.391 [2024-11-20 15:40:29.244108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.391 [2024-11-20 15:40:29.244139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.391 qpair failed and we were unable to recover it. 00:30:40.391 [2024-11-20 15:40:29.244531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.391 [2024-11-20 15:40:29.244561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.391 qpair failed and we were unable to recover it. 00:30:40.391 [2024-11-20 15:40:29.244923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.391 [2024-11-20 15:40:29.244952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.391 qpair failed and we were unable to recover it. 
00:30:40.391 [2024-11-20 15:40:29.245317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.391 [2024-11-20 15:40:29.245347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.391 qpair failed and we were unable to recover it.
[... the same three-message pattern (posix.c:1054:posix_sock_create "connect() failed, errno = 111", then nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.") repeats roughly 200 more times for the same tqpair, with log timestamps 2024-11-20 15:40:29.245 through 15:40:29.319 and console clock 00:30:40.391 through 00:30:40.669 ...]
00:30:40.669 [2024-11-20 15:40:29.319619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.669 [2024-11-20 15:40:29.319648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.669 qpair failed and we were unable to recover it. 00:30:40.669 [2024-11-20 15:40:29.320000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.669 [2024-11-20 15:40:29.320030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.669 qpair failed and we were unable to recover it. 00:30:40.669 [2024-11-20 15:40:29.320420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.669 [2024-11-20 15:40:29.320450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.670 qpair failed and we were unable to recover it. 00:30:40.670 [2024-11-20 15:40:29.320799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.670 [2024-11-20 15:40:29.320829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.670 qpair failed and we were unable to recover it. 00:30:40.670 [2024-11-20 15:40:29.321079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.670 [2024-11-20 15:40:29.321106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.670 qpair failed and we were unable to recover it. 00:30:40.670 [2024-11-20 15:40:29.321461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.670 [2024-11-20 15:40:29.321492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.670 qpair failed and we were unable to recover it. 00:30:40.670 [2024-11-20 15:40:29.321865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.670 [2024-11-20 15:40:29.321894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.670 qpair failed and we were unable to recover it. 00:30:40.670 [2024-11-20 15:40:29.322272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.670 [2024-11-20 15:40:29.322302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.670 qpair failed and we were unable to recover it. 00:30:40.670 [2024-11-20 15:40:29.322664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.670 [2024-11-20 15:40:29.322691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.670 qpair failed and we were unable to recover it. 00:30:40.670 [2024-11-20 15:40:29.323046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.670 [2024-11-20 15:40:29.323077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.670 qpair failed and we were unable to recover it. 
00:30:40.670 [2024-11-20 15:40:29.323342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.670 [2024-11-20 15:40:29.323377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.670 qpair failed and we were unable to recover it. 00:30:40.670 [2024-11-20 15:40:29.323760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.670 [2024-11-20 15:40:29.323792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.670 qpair failed and we were unable to recover it. 00:30:40.670 [2024-11-20 15:40:29.324052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.670 [2024-11-20 15:40:29.324085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.670 qpair failed and we were unable to recover it. 00:30:40.670 [2024-11-20 15:40:29.324455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.670 [2024-11-20 15:40:29.324487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.670 qpair failed and we were unable to recover it. 00:30:40.670 [2024-11-20 15:40:29.324841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.670 [2024-11-20 15:40:29.324872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.670 qpair failed and we were unable to recover it. 00:30:40.670 [2024-11-20 15:40:29.325103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.670 [2024-11-20 15:40:29.325133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.670 qpair failed and we were unable to recover it. 00:30:40.670 [2024-11-20 15:40:29.325391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.670 [2024-11-20 15:40:29.325422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.670 qpair failed and we were unable to recover it. 00:30:40.670 [2024-11-20 15:40:29.325814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.670 [2024-11-20 15:40:29.325844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.670 qpair failed and we were unable to recover it. 00:30:40.670 [2024-11-20 15:40:29.326192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.670 [2024-11-20 15:40:29.326223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.670 qpair failed and we were unable to recover it. 00:30:40.670 [2024-11-20 15:40:29.326605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.670 [2024-11-20 15:40:29.326635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.670 qpair failed and we were unable to recover it. 
00:30:40.670 [2024-11-20 15:40:29.326989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.670 [2024-11-20 15:40:29.327022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.670 qpair failed and we were unable to recover it. 00:30:40.670 [2024-11-20 15:40:29.327363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.670 [2024-11-20 15:40:29.327395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.670 qpair failed and we were unable to recover it. 00:30:40.670 [2024-11-20 15:40:29.327647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.670 [2024-11-20 15:40:29.327677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.670 qpair failed and we were unable to recover it. 00:30:40.670 [2024-11-20 15:40:29.328030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.670 [2024-11-20 15:40:29.328063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.670 qpair failed and we were unable to recover it. 00:30:40.670 [2024-11-20 15:40:29.328298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.670 [2024-11-20 15:40:29.328329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.670 qpair failed and we were unable to recover it. 00:30:40.670 [2024-11-20 15:40:29.328578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.670 [2024-11-20 15:40:29.328611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.670 qpair failed and we were unable to recover it. 00:30:40.670 [2024-11-20 15:40:29.328974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.670 [2024-11-20 15:40:29.329007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.670 qpair failed and we were unable to recover it. 00:30:40.670 [2024-11-20 15:40:29.329365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.670 [2024-11-20 15:40:29.329404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.670 qpair failed and we were unable to recover it. 00:30:40.670 [2024-11-20 15:40:29.329671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.670 [2024-11-20 15:40:29.329701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.670 qpair failed and we were unable to recover it. 00:30:40.670 [2024-11-20 15:40:29.330051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.670 [2024-11-20 15:40:29.330082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.670 qpair failed and we were unable to recover it. 
00:30:40.670 [2024-11-20 15:40:29.330449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.670 [2024-11-20 15:40:29.330482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.670 qpair failed and we were unable to recover it. 00:30:40.670 [2024-11-20 15:40:29.330851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.670 [2024-11-20 15:40:29.330884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.670 qpair failed and we were unable to recover it. 00:30:40.670 [2024-11-20 15:40:29.331240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.670 [2024-11-20 15:40:29.331272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.670 qpair failed and we were unable to recover it. 00:30:40.670 [2024-11-20 15:40:29.331646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.670 [2024-11-20 15:40:29.331677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.670 qpair failed and we were unable to recover it. 00:30:40.670 [2024-11-20 15:40:29.331890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.670 [2024-11-20 15:40:29.331920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.670 qpair failed and we were unable to recover it. 00:30:40.670 [2024-11-20 15:40:29.332261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.670 [2024-11-20 15:40:29.332294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.670 qpair failed and we were unable to recover it. 00:30:40.670 [2024-11-20 15:40:29.332660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.670 [2024-11-20 15:40:29.332690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.670 qpair failed and we were unable to recover it. 00:30:40.670 [2024-11-20 15:40:29.333039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.670 [2024-11-20 15:40:29.333072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.670 qpair failed and we were unable to recover it. 00:30:40.670 [2024-11-20 15:40:29.333415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.670 [2024-11-20 15:40:29.333447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.670 qpair failed and we were unable to recover it. 00:30:40.670 [2024-11-20 15:40:29.333816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.670 [2024-11-20 15:40:29.333847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.670 qpair failed and we were unable to recover it. 
00:30:40.671 [2024-11-20 15:40:29.334195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.671 [2024-11-20 15:40:29.334227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.671 qpair failed and we were unable to recover it. 00:30:40.671 [2024-11-20 15:40:29.334600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.671 [2024-11-20 15:40:29.334636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.671 qpair failed and we were unable to recover it. 00:30:40.671 [2024-11-20 15:40:29.334850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.671 [2024-11-20 15:40:29.334879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.671 qpair failed and we were unable to recover it. 00:30:40.671 [2024-11-20 15:40:29.335117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.671 [2024-11-20 15:40:29.335149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.671 qpair failed and we were unable to recover it. 00:30:40.671 [2024-11-20 15:40:29.335386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.671 [2024-11-20 15:40:29.335417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.671 qpair failed and we were unable to recover it. 00:30:40.671 [2024-11-20 15:40:29.335764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.671 [2024-11-20 15:40:29.335794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.671 qpair failed and we were unable to recover it. 00:30:40.671 [2024-11-20 15:40:29.336166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.671 [2024-11-20 15:40:29.336199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.671 qpair failed and we were unable to recover it. 00:30:40.671 [2024-11-20 15:40:29.336555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.671 [2024-11-20 15:40:29.336587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.671 qpair failed and we were unable to recover it. 00:30:40.671 [2024-11-20 15:40:29.336955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.671 [2024-11-20 15:40:29.336986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.671 qpair failed and we were unable to recover it. 00:30:40.671 [2024-11-20 15:40:29.337329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.671 [2024-11-20 15:40:29.337364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.671 qpair failed and we were unable to recover it. 
00:30:40.671 [2024-11-20 15:40:29.337738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.671 [2024-11-20 15:40:29.337770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.671 qpair failed and we were unable to recover it. 00:30:40.671 [2024-11-20 15:40:29.338123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.671 [2024-11-20 15:40:29.338153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.671 qpair failed and we were unable to recover it. 00:30:40.671 [2024-11-20 15:40:29.338532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.671 [2024-11-20 15:40:29.338564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.671 qpair failed and we were unable to recover it. 00:30:40.671 [2024-11-20 15:40:29.338681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.671 [2024-11-20 15:40:29.338712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.671 qpair failed and we were unable to recover it. 00:30:40.671 [2024-11-20 15:40:29.338941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.671 [2024-11-20 15:40:29.338971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.671 qpair failed and we were unable to recover it. 00:30:40.671 [2024-11-20 15:40:29.339231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.671 [2024-11-20 15:40:29.339263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.671 qpair failed and we were unable to recover it. 00:30:40.671 [2024-11-20 15:40:29.339649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.671 [2024-11-20 15:40:29.339680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.671 qpair failed and we were unable to recover it. 00:30:40.671 [2024-11-20 15:40:29.340040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.671 [2024-11-20 15:40:29.340072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.671 qpair failed and we were unable to recover it. 00:30:40.671 [2024-11-20 15:40:29.340409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.671 [2024-11-20 15:40:29.340442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.671 qpair failed and we were unable to recover it. 00:30:40.671 [2024-11-20 15:40:29.340683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.671 [2024-11-20 15:40:29.340718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.671 qpair failed and we were unable to recover it. 
00:30:40.671 [2024-11-20 15:40:29.340944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.671 [2024-11-20 15:40:29.340976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.671 qpair failed and we were unable to recover it. 00:30:40.671 [2024-11-20 15:40:29.341364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.671 [2024-11-20 15:40:29.341397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.671 qpair failed and we were unable to recover it. 00:30:40.671 [2024-11-20 15:40:29.341747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.671 [2024-11-20 15:40:29.341780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.671 qpair failed and we were unable to recover it. 00:30:40.671 [2024-11-20 15:40:29.342129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.671 [2024-11-20 15:40:29.342167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.671 qpair failed and we were unable to recover it. 00:30:40.671 [2024-11-20 15:40:29.342399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.671 [2024-11-20 15:40:29.342429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.671 qpair failed and we were unable to recover it. 00:30:40.671 [2024-11-20 15:40:29.342779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.671 [2024-11-20 15:40:29.342808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.671 qpair failed and we were unable to recover it. 00:30:40.671 [2024-11-20 15:40:29.343144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.671 [2024-11-20 15:40:29.343182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.671 qpair failed and we were unable to recover it. 00:30:40.671 [2024-11-20 15:40:29.343419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.671 [2024-11-20 15:40:29.343455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.671 qpair failed and we were unable to recover it. 00:30:40.671 [2024-11-20 15:40:29.343793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.671 [2024-11-20 15:40:29.343823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.671 qpair failed and we were unable to recover it. 00:30:40.671 [2024-11-20 15:40:29.344177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.671 [2024-11-20 15:40:29.344209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.671 qpair failed and we were unable to recover it. 
00:30:40.671 [2024-11-20 15:40:29.344588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.671 [2024-11-20 15:40:29.344619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.671 qpair failed and we were unable to recover it. 00:30:40.671 [2024-11-20 15:40:29.345018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.671 [2024-11-20 15:40:29.345050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.671 qpair failed and we were unable to recover it. 00:30:40.671 [2024-11-20 15:40:29.345410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.672 [2024-11-20 15:40:29.345441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.672 qpair failed and we were unable to recover it. 00:30:40.672 [2024-11-20 15:40:29.345683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.672 [2024-11-20 15:40:29.345716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.672 qpair failed and we were unable to recover it. 00:30:40.672 [2024-11-20 15:40:29.346064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.672 [2024-11-20 15:40:29.346097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.672 qpair failed and we were unable to recover it. 00:30:40.672 [2024-11-20 15:40:29.346334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.672 [2024-11-20 15:40:29.346369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.672 qpair failed and we were unable to recover it. 00:30:40.672 [2024-11-20 15:40:29.346780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.672 [2024-11-20 15:40:29.346811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.672 qpair failed and we were unable to recover it. 00:30:40.672 [2024-11-20 15:40:29.347175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.672 [2024-11-20 15:40:29.347207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.672 qpair failed and we were unable to recover it. 00:30:40.672 [2024-11-20 15:40:29.347585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.672 [2024-11-20 15:40:29.347616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.672 qpair failed and we were unable to recover it. 00:30:40.672 [2024-11-20 15:40:29.347991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.672 [2024-11-20 15:40:29.348023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.672 qpair failed and we were unable to recover it. 
00:30:40.672 [2024-11-20 15:40:29.348396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.672 [2024-11-20 15:40:29.348427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.672 qpair failed and we were unable to recover it. 00:30:40.672 [2024-11-20 15:40:29.348783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.672 [2024-11-20 15:40:29.348815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.672 qpair failed and we were unable to recover it. 00:30:40.672 [2024-11-20 15:40:29.349025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.672 [2024-11-20 15:40:29.349056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.672 qpair failed and we were unable to recover it. 00:30:40.672 [2024-11-20 15:40:29.349393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.672 [2024-11-20 15:40:29.349427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.672 qpair failed and we were unable to recover it. 00:30:40.672 [2024-11-20 15:40:29.349805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.672 [2024-11-20 15:40:29.349836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.672 qpair failed and we were unable to recover it. 00:30:40.672 [2024-11-20 15:40:29.350213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.672 [2024-11-20 15:40:29.350245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.672 qpair failed and we were unable to recover it. 00:30:40.672 [2024-11-20 15:40:29.350612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.672 [2024-11-20 15:40:29.350643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.672 qpair failed and we were unable to recover it. 00:30:40.672 [2024-11-20 15:40:29.350882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.672 [2024-11-20 15:40:29.350913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.672 qpair failed and we were unable to recover it. 00:30:40.672 [2024-11-20 15:40:29.351295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.672 [2024-11-20 15:40:29.351330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.672 qpair failed and we were unable to recover it. 00:30:40.672 [2024-11-20 15:40:29.351583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.672 [2024-11-20 15:40:29.351613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.672 qpair failed and we were unable to recover it. 
00:30:40.672 [2024-11-20 15:40:29.351845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.672 [2024-11-20 15:40:29.351876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.672 qpair failed and we were unable to recover it. 00:30:40.672 [2024-11-20 15:40:29.352280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.672 [2024-11-20 15:40:29.352313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.672 qpair failed and we were unable to recover it. 00:30:40.672 [2024-11-20 15:40:29.352546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.672 [2024-11-20 15:40:29.352576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.672 qpair failed and we were unable to recover it. 00:30:40.672 [2024-11-20 15:40:29.352927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.672 [2024-11-20 15:40:29.352957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.672 qpair failed and we were unable to recover it. 00:30:40.672 [2024-11-20 15:40:29.353300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.672 [2024-11-20 15:40:29.353333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.672 qpair failed and we were unable to recover it. 00:30:40.672 [2024-11-20 15:40:29.353583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.672 [2024-11-20 15:40:29.353613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.672 qpair failed and we were unable to recover it. 00:30:40.672 [2024-11-20 15:40:29.353976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.672 [2024-11-20 15:40:29.354009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.672 qpair failed and we were unable to recover it. 00:30:40.672 [2024-11-20 15:40:29.354238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.672 [2024-11-20 15:40:29.354270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.672 qpair failed and we were unable to recover it. 00:30:40.672 [2024-11-20 15:40:29.354478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.672 [2024-11-20 15:40:29.354509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.672 qpair failed and we were unable to recover it. 00:30:40.672 [2024-11-20 15:40:29.354857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.672 [2024-11-20 15:40:29.354887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.672 qpair failed and we were unable to recover it. 
00:30:40.672 [2024-11-20 15:40:29.355242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.672 [2024-11-20 15:40:29.355274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.672 qpair failed and we were unable to recover it. 00:30:40.672 [2024-11-20 15:40:29.355657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.672 [2024-11-20 15:40:29.355688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.672 qpair failed and we were unable to recover it. 00:30:40.672 [2024-11-20 15:40:29.356046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.672 [2024-11-20 15:40:29.356080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.672 qpair failed and we were unable to recover it. 00:30:40.672 [2024-11-20 15:40:29.356460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.672 [2024-11-20 15:40:29.356491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.672 qpair failed and we were unable to recover it. 00:30:40.672 [2024-11-20 15:40:29.356841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.672 [2024-11-20 15:40:29.356872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.672 qpair failed and we were unable to recover it. 00:30:40.672 [2024-11-20 15:40:29.357112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.672 [2024-11-20 15:40:29.357143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.672 qpair failed and we were unable to recover it. 00:30:40.672 [2024-11-20 15:40:29.357502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.672 [2024-11-20 15:40:29.357535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.672 qpair failed and we were unable to recover it. 00:30:40.672 [2024-11-20 15:40:29.357779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.672 [2024-11-20 15:40:29.357816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.672 qpair failed and we were unable to recover it. 00:30:40.672 [2024-11-20 15:40:29.358057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.672 [2024-11-20 15:40:29.358089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.672 qpair failed and we were unable to recover it. 00:30:40.672 [2024-11-20 15:40:29.358360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.672 [2024-11-20 15:40:29.358393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.672 qpair failed and we were unable to recover it. 
00:30:40.672 [2024-11-20 15:40:29.358739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.673 [2024-11-20 15:40:29.358770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.673 qpair failed and we were unable to recover it. 00:30:40.673 [2024-11-20 15:40:29.359140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.673 [2024-11-20 15:40:29.359183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.673 qpair failed and we were unable to recover it. 00:30:40.673 [2024-11-20 15:40:29.359441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.673 [2024-11-20 15:40:29.359474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.673 qpair failed and we were unable to recover it. 00:30:40.673 [2024-11-20 15:40:29.359822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.673 [2024-11-20 15:40:29.359854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.673 qpair failed and we were unable to recover it. 00:30:40.673 [2024-11-20 15:40:29.360077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.673 [2024-11-20 15:40:29.360109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.673 qpair failed and we were unable to recover it. 00:30:40.673 [2024-11-20 15:40:29.360492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.673 [2024-11-20 15:40:29.360524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.673 qpair failed and we were unable to recover it. 00:30:40.673 [2024-11-20 15:40:29.360871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.673 [2024-11-20 15:40:29.360903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.673 qpair failed and we were unable to recover it. 00:30:40.673 [2024-11-20 15:40:29.361150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.673 [2024-11-20 15:40:29.361188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.673 qpair failed and we were unable to recover it. 00:30:40.673 [2024-11-20 15:40:29.361554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.673 [2024-11-20 15:40:29.361586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.673 qpair failed and we were unable to recover it. 00:30:40.673 [2024-11-20 15:40:29.361920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.673 [2024-11-20 15:40:29.361953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.673 qpair failed and we were unable to recover it. 
00:30:40.673 [2024-11-20 15:40:29.362329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.673 [2024-11-20 15:40:29.362362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.673 qpair failed and we were unable to recover it. 00:30:40.673 [2024-11-20 15:40:29.362714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.673 [2024-11-20 15:40:29.362747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.673 qpair failed and we were unable to recover it. 00:30:40.673 [2024-11-20 15:40:29.363101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.673 [2024-11-20 15:40:29.363131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.673 qpair failed and we were unable to recover it. 00:30:40.673 [2024-11-20 15:40:29.363518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.673 [2024-11-20 15:40:29.363550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.673 qpair failed and we were unable to recover it. 00:30:40.673 [2024-11-20 15:40:29.363910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.673 [2024-11-20 15:40:29.363943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.673 qpair failed and we were unable to recover it. 00:30:40.673 [2024-11-20 15:40:29.364170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.673 [2024-11-20 15:40:29.364203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.673 qpair failed and we were unable to recover it. 00:30:40.673 [2024-11-20 15:40:29.364572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.673 [2024-11-20 15:40:29.364602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.673 qpair failed and we were unable to recover it. 00:30:40.673 [2024-11-20 15:40:29.364943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.673 [2024-11-20 15:40:29.364976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.673 qpair failed and we were unable to recover it. 00:30:40.673 [2024-11-20 15:40:29.365332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.673 [2024-11-20 15:40:29.365365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.673 qpair failed and we were unable to recover it. 00:30:40.673 [2024-11-20 15:40:29.365721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.673 [2024-11-20 15:40:29.365752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.673 qpair failed and we were unable to recover it. 
00:30:40.673 [2024-11-20 15:40:29.366109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.673 [2024-11-20 15:40:29.366142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:40.673 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats with advancing timestamps from 15:40:29.366 through 15:40:29.439, roughly 210 occurrences in total ...]
00:30:40.679 [2024-11-20 15:40:29.439442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.679 [2024-11-20 15:40:29.439475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.679 qpair failed and we were unable to recover it. 00:30:40.679 [2024-11-20 15:40:29.439848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.679 [2024-11-20 15:40:29.439886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.679 qpair failed and we were unable to recover it. 00:30:40.679 [2024-11-20 15:40:29.440259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.679 [2024-11-20 15:40:29.440293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.679 qpair failed and we were unable to recover it. 00:30:40.679 [2024-11-20 15:40:29.440509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.679 [2024-11-20 15:40:29.440538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.679 qpair failed and we were unable to recover it. 00:30:40.679 [2024-11-20 15:40:29.440695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.679 [2024-11-20 15:40:29.440725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.679 qpair failed and we were unable to recover it. 00:30:40.679 [2024-11-20 15:40:29.440963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.679 [2024-11-20 15:40:29.440993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.679 qpair failed and we were unable to recover it. 00:30:40.679 [2024-11-20 15:40:29.441272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.679 [2024-11-20 15:40:29.441306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.679 qpair failed and we were unable to recover it. 00:30:40.679 [2024-11-20 15:40:29.441536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.679 [2024-11-20 15:40:29.441566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.679 qpair failed and we were unable to recover it. 00:30:40.679 [2024-11-20 15:40:29.441921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.679 [2024-11-20 15:40:29.441952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.679 qpair failed and we were unable to recover it. 00:30:40.679 [2024-11-20 15:40:29.442187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.679 [2024-11-20 15:40:29.442219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.679 qpair failed and we were unable to recover it. 
00:30:40.679 [2024-11-20 15:40:29.442484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.679 [2024-11-20 15:40:29.442516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.679 qpair failed and we were unable to recover it. 00:30:40.679 [2024-11-20 15:40:29.442885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.679 [2024-11-20 15:40:29.442917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.679 qpair failed and we were unable to recover it. 00:30:40.679 [2024-11-20 15:40:29.443152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.679 [2024-11-20 15:40:29.443206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.679 qpair failed and we were unable to recover it. 00:30:40.679 [2024-11-20 15:40:29.443592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.679 [2024-11-20 15:40:29.443625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.679 qpair failed and we were unable to recover it. 00:30:40.679 [2024-11-20 15:40:29.444006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.679 [2024-11-20 15:40:29.444040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.679 qpair failed and we were unable to recover it. 00:30:40.679 [2024-11-20 15:40:29.444291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.679 [2024-11-20 15:40:29.444324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.679 qpair failed and we were unable to recover it. 00:30:40.679 [2024-11-20 15:40:29.444580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.679 [2024-11-20 15:40:29.444610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.679 qpair failed and we were unable to recover it. 00:30:40.679 [2024-11-20 15:40:29.444861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.679 [2024-11-20 15:40:29.444892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.679 qpair failed and we were unable to recover it. 00:30:40.679 [2024-11-20 15:40:29.445272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.679 [2024-11-20 15:40:29.445303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.679 qpair failed and we were unable to recover it. 00:30:40.679 [2024-11-20 15:40:29.445682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.679 [2024-11-20 15:40:29.445715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.679 qpair failed and we were unable to recover it. 
00:30:40.679 [2024-11-20 15:40:29.446058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.679 [2024-11-20 15:40:29.446090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.679 qpair failed and we were unable to recover it. 00:30:40.679 [2024-11-20 15:40:29.446464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.679 [2024-11-20 15:40:29.446498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.679 qpair failed and we were unable to recover it. 00:30:40.679 [2024-11-20 15:40:29.446866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.679 [2024-11-20 15:40:29.446903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.679 qpair failed and we were unable to recover it. 00:30:40.679 [2024-11-20 15:40:29.447302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.679 [2024-11-20 15:40:29.447333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.679 qpair failed and we were unable to recover it. 00:30:40.679 [2024-11-20 15:40:29.447728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.679 [2024-11-20 15:40:29.447758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.679 qpair failed and we were unable to recover it. 00:30:40.679 [2024-11-20 15:40:29.448149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.679 [2024-11-20 15:40:29.448193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.679 qpair failed and we were unable to recover it. 00:30:40.679 [2024-11-20 15:40:29.448424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.679 [2024-11-20 15:40:29.448454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.679 qpair failed and we were unable to recover it. 00:30:40.679 [2024-11-20 15:40:29.448714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.679 [2024-11-20 15:40:29.448747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.679 qpair failed and we were unable to recover it. 00:30:40.679 [2024-11-20 15:40:29.449101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.679 [2024-11-20 15:40:29.449133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.679 qpair failed and we were unable to recover it. 00:30:40.679 [2024-11-20 15:40:29.449418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.679 [2024-11-20 15:40:29.449450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.679 qpair failed and we were unable to recover it. 
00:30:40.679 [2024-11-20 15:40:29.449858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.679 [2024-11-20 15:40:29.449888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.679 qpair failed and we were unable to recover it. 00:30:40.679 [2024-11-20 15:40:29.450259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.679 [2024-11-20 15:40:29.450290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.679 qpair failed and we were unable to recover it. 00:30:40.680 [2024-11-20 15:40:29.450661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.680 [2024-11-20 15:40:29.450690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.680 qpair failed and we were unable to recover it. 00:30:40.680 [2024-11-20 15:40:29.450920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.680 [2024-11-20 15:40:29.450949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.680 qpair failed and we were unable to recover it. 00:30:40.680 [2024-11-20 15:40:29.451211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.680 [2024-11-20 15:40:29.451247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.680 qpair failed and we were unable to recover it. 00:30:40.680 [2024-11-20 15:40:29.451660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.680 [2024-11-20 15:40:29.451688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.680 qpair failed and we were unable to recover it. 00:30:40.680 [2024-11-20 15:40:29.451976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.680 [2024-11-20 15:40:29.452004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.680 qpair failed and we were unable to recover it. 00:30:40.680 [2024-11-20 15:40:29.452389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.680 [2024-11-20 15:40:29.452419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.680 qpair failed and we were unable to recover it. 00:30:40.680 [2024-11-20 15:40:29.452783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.680 [2024-11-20 15:40:29.452811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.680 qpair failed and we were unable to recover it. 00:30:40.680 [2024-11-20 15:40:29.453204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.680 [2024-11-20 15:40:29.453234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.680 qpair failed and we were unable to recover it. 
00:30:40.680 [2024-11-20 15:40:29.453493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.680 [2024-11-20 15:40:29.453521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.680 qpair failed and we were unable to recover it. 00:30:40.680 [2024-11-20 15:40:29.453886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.680 [2024-11-20 15:40:29.453916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.680 qpair failed and we were unable to recover it. 00:30:40.680 [2024-11-20 15:40:29.454304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.680 [2024-11-20 15:40:29.454337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.680 qpair failed and we were unable to recover it. 00:30:40.680 [2024-11-20 15:40:29.454704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.680 [2024-11-20 15:40:29.454735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.680 qpair failed and we were unable to recover it. 00:30:40.680 [2024-11-20 15:40:29.454942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.680 [2024-11-20 15:40:29.454974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.680 qpair failed and we were unable to recover it. 00:30:40.680 [2024-11-20 15:40:29.455202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.680 [2024-11-20 15:40:29.455233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.680 qpair failed and we were unable to recover it. 00:30:40.680 [2024-11-20 15:40:29.455597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.680 [2024-11-20 15:40:29.455630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.680 qpair failed and we were unable to recover it. 00:30:40.680 [2024-11-20 15:40:29.455974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.680 [2024-11-20 15:40:29.456008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.680 qpair failed and we were unable to recover it. 00:30:40.680 [2024-11-20 15:40:29.456398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.680 [2024-11-20 15:40:29.456431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.680 qpair failed and we were unable to recover it. 00:30:40.680 [2024-11-20 15:40:29.456771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.680 [2024-11-20 15:40:29.456802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.680 qpair failed and we were unable to recover it. 
00:30:40.680 [2024-11-20 15:40:29.457131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.680 [2024-11-20 15:40:29.457168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.680 qpair failed and we were unable to recover it. 00:30:40.680 [2024-11-20 15:40:29.457414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.680 [2024-11-20 15:40:29.457446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.680 qpair failed and we were unable to recover it. 00:30:40.680 [2024-11-20 15:40:29.457679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.680 [2024-11-20 15:40:29.457711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.680 qpair failed and we were unable to recover it. 00:30:40.680 [2024-11-20 15:40:29.457829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.680 [2024-11-20 15:40:29.457860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.680 qpair failed and we were unable to recover it. 00:30:40.680 [2024-11-20 15:40:29.458214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.680 [2024-11-20 15:40:29.458248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.680 qpair failed and we were unable to recover it. 00:30:40.680 [2024-11-20 15:40:29.458583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.680 [2024-11-20 15:40:29.458615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.680 qpair failed and we were unable to recover it. 00:30:40.680 [2024-11-20 15:40:29.458867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.680 [2024-11-20 15:40:29.458901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.680 qpair failed and we were unable to recover it. 00:30:40.680 [2024-11-20 15:40:29.459271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.680 [2024-11-20 15:40:29.459305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.680 qpair failed and we were unable to recover it. 00:30:40.680 [2024-11-20 15:40:29.459652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.680 [2024-11-20 15:40:29.459684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.680 qpair failed and we were unable to recover it. 00:30:40.680 [2024-11-20 15:40:29.459949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.680 [2024-11-20 15:40:29.459985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.680 qpair failed and we were unable to recover it. 
00:30:40.680 [2024-11-20 15:40:29.460381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.680 [2024-11-20 15:40:29.460414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.680 qpair failed and we were unable to recover it. 00:30:40.680 [2024-11-20 15:40:29.460793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.680 [2024-11-20 15:40:29.460825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.680 qpair failed and we were unable to recover it. 00:30:40.680 [2024-11-20 15:40:29.461063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.680 [2024-11-20 15:40:29.461097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.680 qpair failed and we were unable to recover it. 00:30:40.680 [2024-11-20 15:40:29.461497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.680 [2024-11-20 15:40:29.461529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.680 qpair failed and we were unable to recover it. 00:30:40.680 [2024-11-20 15:40:29.461840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.680 [2024-11-20 15:40:29.461873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.680 qpair failed and we were unable to recover it. 00:30:40.680 [2024-11-20 15:40:29.462235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.680 [2024-11-20 15:40:29.462268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.680 qpair failed and we were unable to recover it. 00:30:40.680 [2024-11-20 15:40:29.462658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.681 [2024-11-20 15:40:29.462690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.681 qpair failed and we were unable to recover it. 00:30:40.681 [2024-11-20 15:40:29.463050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.681 [2024-11-20 15:40:29.463082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.681 qpair failed and we were unable to recover it. 00:30:40.681 [2024-11-20 15:40:29.463461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.681 [2024-11-20 15:40:29.463494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.681 qpair failed and we were unable to recover it. 00:30:40.681 [2024-11-20 15:40:29.463843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.681 [2024-11-20 15:40:29.463876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.681 qpair failed and we were unable to recover it. 
00:30:40.681 [2024-11-20 15:40:29.464237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.681 [2024-11-20 15:40:29.464270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.681 qpair failed and we were unable to recover it. 00:30:40.681 [2024-11-20 15:40:29.464647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.681 [2024-11-20 15:40:29.464678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.681 qpair failed and we were unable to recover it. 00:30:40.681 [2024-11-20 15:40:29.464891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.681 [2024-11-20 15:40:29.464922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.681 qpair failed and we were unable to recover it. 00:30:40.681 [2024-11-20 15:40:29.465279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.681 [2024-11-20 15:40:29.465312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.681 qpair failed and we were unable to recover it. 00:30:40.681 [2024-11-20 15:40:29.465420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.681 [2024-11-20 15:40:29.465458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.681 qpair failed and we were unable to recover it. 00:30:40.681 [2024-11-20 15:40:29.465814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.681 [2024-11-20 15:40:29.465846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.681 qpair failed and we were unable to recover it. 00:30:40.681 [2024-11-20 15:40:29.466215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.681 [2024-11-20 15:40:29.466250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.681 qpair failed and we were unable to recover it. 00:30:40.681 [2024-11-20 15:40:29.466662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.681 [2024-11-20 15:40:29.466695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.681 qpair failed and we were unable to recover it. 00:30:40.681 [2024-11-20 15:40:29.466908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.681 [2024-11-20 15:40:29.466941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.681 qpair failed and we were unable to recover it. 00:30:40.681 [2024-11-20 15:40:29.467150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.681 [2024-11-20 15:40:29.467191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.681 qpair failed and we were unable to recover it. 
00:30:40.681 [2024-11-20 15:40:29.467511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.681 [2024-11-20 15:40:29.467542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.681 qpair failed and we were unable to recover it. 00:30:40.681 [2024-11-20 15:40:29.467893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.681 [2024-11-20 15:40:29.467924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.681 qpair failed and we were unable to recover it. 00:30:40.681 [2024-11-20 15:40:29.468281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.681 [2024-11-20 15:40:29.468314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.681 qpair failed and we were unable to recover it. 00:30:40.681 [2024-11-20 15:40:29.468527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.681 [2024-11-20 15:40:29.468558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.681 qpair failed and we were unable to recover it. 00:30:40.681 [2024-11-20 15:40:29.468942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.681 [2024-11-20 15:40:29.468973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.681 qpair failed and we were unable to recover it. 00:30:40.681 [2024-11-20 15:40:29.469326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.681 [2024-11-20 15:40:29.469360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.681 qpair failed and we were unable to recover it. 00:30:40.681 [2024-11-20 15:40:29.469720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.681 [2024-11-20 15:40:29.469751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.681 qpair failed and we were unable to recover it. 00:30:40.681 [2024-11-20 15:40:29.469964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.681 [2024-11-20 15:40:29.469996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.681 qpair failed and we were unable to recover it. 00:30:40.681 [2024-11-20 15:40:29.470396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.681 [2024-11-20 15:40:29.470430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.681 qpair failed and we were unable to recover it. 00:30:40.681 [2024-11-20 15:40:29.470798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.681 [2024-11-20 15:40:29.470830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.681 qpair failed and we were unable to recover it. 
00:30:40.681 [2024-11-20 15:40:29.471066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.681 [2024-11-20 15:40:29.471100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.681 qpair failed and we were unable to recover it. 00:30:40.681 [2024-11-20 15:40:29.471478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.681 [2024-11-20 15:40:29.471512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.681 qpair failed and we were unable to recover it. 00:30:40.681 [2024-11-20 15:40:29.471856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.681 [2024-11-20 15:40:29.471889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.681 qpair failed and we were unable to recover it. 00:30:40.681 [2024-11-20 15:40:29.472104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.681 [2024-11-20 15:40:29.472135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.681 qpair failed and we were unable to recover it. 00:30:40.681 [2024-11-20 15:40:29.472419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.681 [2024-11-20 15:40:29.472452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.681 qpair failed and we were unable to recover it. 00:30:40.681 [2024-11-20 15:40:29.472805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.681 [2024-11-20 15:40:29.472836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.681 qpair failed and we were unable to recover it. 00:30:40.681 [2024-11-20 15:40:29.473217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.681 [2024-11-20 15:40:29.473249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.681 qpair failed and we were unable to recover it. 00:30:40.681 [2024-11-20 15:40:29.473624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.681 [2024-11-20 15:40:29.473657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.681 qpair failed and we were unable to recover it. 00:30:40.681 [2024-11-20 15:40:29.474013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.681 [2024-11-20 15:40:29.474044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.681 qpair failed and we were unable to recover it. 00:30:40.681 [2024-11-20 15:40:29.474276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.681 [2024-11-20 15:40:29.474309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.681 qpair failed and we were unable to recover it. 
00:30:40.681 [2024-11-20 15:40:29.474681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.681 [2024-11-20 15:40:29.474714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.681 qpair failed and we were unable to recover it. 00:30:40.681 [2024-11-20 15:40:29.475081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.681 [2024-11-20 15:40:29.475114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.681 qpair failed and we were unable to recover it. 00:30:40.681 [2024-11-20 15:40:29.475494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.681 [2024-11-20 15:40:29.475527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.681 qpair failed and we were unable to recover it. 00:30:40.681 [2024-11-20 15:40:29.475837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.681 [2024-11-20 15:40:29.475870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.681 qpair failed and we were unable to recover it. 00:30:40.682 [2024-11-20 15:40:29.476080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.682 [2024-11-20 15:40:29.476111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.682 qpair failed and we were unable to recover it. 00:30:40.682 [2024-11-20 15:40:29.476478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.682 [2024-11-20 15:40:29.476510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.682 qpair failed and we were unable to recover it. 00:30:40.682 [2024-11-20 15:40:29.476865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.682 [2024-11-20 15:40:29.476898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.682 qpair failed and we were unable to recover it. 00:30:40.682 [2024-11-20 15:40:29.477284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.682 [2024-11-20 15:40:29.477316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.682 qpair failed and we were unable to recover it. 00:30:40.682 [2024-11-20 15:40:29.477673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.682 [2024-11-20 15:40:29.477706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.682 qpair failed and we were unable to recover it. 00:30:40.682 [2024-11-20 15:40:29.478058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.682 [2024-11-20 15:40:29.478091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.682 qpair failed and we were unable to recover it. 
00:30:40.682 [2024-11-20 15:40:29.478470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.682 [2024-11-20 15:40:29.478504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.682 qpair failed and we were unable to recover it. 00:30:40.682 [2024-11-20 15:40:29.478872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.682 [2024-11-20 15:40:29.478904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.682 qpair failed and we were unable to recover it. 00:30:40.682 [2024-11-20 15:40:29.479129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.682 [2024-11-20 15:40:29.479169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.682 qpair failed and we were unable to recover it. 00:30:40.682 [2024-11-20 15:40:29.479386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.682 [2024-11-20 15:40:29.479416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.682 qpair failed and we were unable to recover it. 00:30:40.682 [2024-11-20 15:40:29.479698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.682 [2024-11-20 15:40:29.479735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.682 qpair failed and we were unable to recover it. 00:30:40.682 [2024-11-20 15:40:29.480078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.682 [2024-11-20 15:40:29.480111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.682 qpair failed and we were unable to recover it. 00:30:40.682 [2024-11-20 15:40:29.480514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.682 [2024-11-20 15:40:29.480547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.682 qpair failed and we were unable to recover it. 00:30:40.682 [2024-11-20 15:40:29.480897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.682 [2024-11-20 15:40:29.480931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.682 qpair failed and we were unable to recover it. 00:30:40.682 [2024-11-20 15:40:29.481297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.682 [2024-11-20 15:40:29.481329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.682 qpair failed and we were unable to recover it. 00:30:40.682 [2024-11-20 15:40:29.481601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.682 [2024-11-20 15:40:29.481631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.682 qpair failed and we were unable to recover it. 
00:30:40.682 [2024-11-20 15:40:29.481879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.682 [2024-11-20 15:40:29.481909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.682 qpair failed and we were unable to recover it. 00:30:40.682 [2024-11-20 15:40:29.482268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.682 [2024-11-20 15:40:29.482303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.682 qpair failed and we were unable to recover it. 00:30:40.682 [2024-11-20 15:40:29.482701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.682 [2024-11-20 15:40:29.482733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.682 qpair failed and we were unable to recover it. 00:30:40.682 [2024-11-20 15:40:29.482832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.682 [2024-11-20 15:40:29.482862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.682 qpair failed and we were unable to recover it. 00:30:40.682 [2024-11-20 15:40:29.483196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.682 [2024-11-20 15:40:29.483229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.682 qpair failed and we were unable to recover it. 00:30:40.682 [2024-11-20 15:40:29.483573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.682 [2024-11-20 15:40:29.483604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.682 qpair failed and we were unable to recover it. 00:30:40.682 [2024-11-20 15:40:29.483975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.682 [2024-11-20 15:40:29.484008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.682 qpair failed and we were unable to recover it. 00:30:40.682 [2024-11-20 15:40:29.484263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.682 [2024-11-20 15:40:29.484294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.682 qpair failed and we were unable to recover it. 00:30:40.682 [2024-11-20 15:40:29.484714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.682 [2024-11-20 15:40:29.484745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.682 qpair failed and we were unable to recover it. 00:30:40.682 [2024-11-20 15:40:29.485090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.682 [2024-11-20 15:40:29.485122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.682 qpair failed and we were unable to recover it. 
00:30:40.682 [2024-11-20 15:40:29.485501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.682 [2024-11-20 15:40:29.485532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:40.682 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats with only the timestamps advancing, from 15:40:29.485892 through 15:40:29.527579 ...]
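errno = 111 is ECONNREFUSED on Linux: the initiator's TCP SYN reaches 10.0.0.2, but nothing is accepting on port 4420 (the IANA default port for NVMe/TCP), so the peer answers with RST, posix_sock_create() sees the refused connect(), and nvme_tcp_qpair_connect_sock() abandons the qpair. Below is a minimal standalone sketch of the same failure mode, assuming only POSIX sockets; it is illustrative, not SPDK's actual code path. The address and port are copied from the log, and any reachable host with no listener on that port behaves the same way.

/* econnrefused_demo.c - reproduce "connect() failed, errno = 111" by hand.
 * Build: cc -o econnrefused_demo econnrefused_demo.c */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),                   /* NVMe/TCP port from the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);  /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* Against a reachable host with no listener on the port this prints:
         *   connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

A refused connect() fails almost instantly, which is why the harness can spin through hundreds of reconnect attempts within a single second of wall-clock time, as the sub-millisecond timestamp spacing above shows.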
00:30:40.685 15:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:40.685 15:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:30:40.685 15:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:30:40.685 15:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:40.685 15:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:40.686 [2024-11-20 15:40:29.530792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.686 [2024-11-20 15:40:29.530874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:40.686 qpair failed and we were unable to recover it.
[... the same failure triple repeats, timestamps only, from 15:40:29.531252 through 15:40:29.533552 ...]
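Each *ERROR* line carries the file, line, and function of the call site (posix.c:1054:posix_sock_create, nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock); SPDK emits these through its SPDK_ERRLOG() logging macro. A toy macro in the same spirit, assuming GCC or Clang, is sketched below; the ERRLOG name and the demo are made up for illustration and are not SPDK's actual definition.

/* errlog_demo.c - mimic the "file:line:function: *ERROR*:" prefix format. */
#include <stdio.h>

/* ##__VA_ARGS__ (drops the trailing comma when no extra args are passed)
 * is a GNU extension accepted by GCC and Clang. */
#define ERRLOG(fmt, ...) \
    fprintf(stderr, "%s:%d:%s: *ERROR*: " fmt, __FILE__, __LINE__, __func__, ##__VA_ARGS__)

int main(void)
{
    /* Prints e.g.: errlog_demo.c:13:main: *ERROR*: connect() failed, errno = 111 */
    ERRLOG("connect() failed, errno = %d\n", 111);
    return 0;
}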
00:30:40.686 [2024-11-20 15:40:29.533932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.686 [2024-11-20 15:40:29.533962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:40.686 qpair failed and we were unable to recover it.
[... the same failure triple repeats, timestamps only, from 15:40:29.534196 through 15:40:29.563309; the connect target (tqpair=0x7f8e88000b90, addr=10.0.0.2, port=4420) and errno = 111 are identical in every occurrence ...]
00:30:40.688 [2024-11-20 15:40:29.563679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.688 [2024-11-20 15:40:29.563708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.688 qpair failed and we were unable to recover it. 00:30:40.688 [2024-11-20 15:40:29.564081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.688 [2024-11-20 15:40:29.564110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.688 qpair failed and we were unable to recover it. 00:30:40.688 [2024-11-20 15:40:29.564510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.688 [2024-11-20 15:40:29.564543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.688 qpair failed and we were unable to recover it. 00:30:40.688 [2024-11-20 15:40:29.564902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.688 [2024-11-20 15:40:29.564932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.688 qpair failed and we were unable to recover it. 00:30:40.688 [2024-11-20 15:40:29.565201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.688 [2024-11-20 15:40:29.565231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.688 qpair failed and we were unable to recover it. 00:30:40.688 [2024-11-20 15:40:29.565609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.688 [2024-11-20 15:40:29.565638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.688 qpair failed and we were unable to recover it. 00:30:40.688 [2024-11-20 15:40:29.565982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.688 [2024-11-20 15:40:29.566012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.688 qpair failed and we were unable to recover it. 00:30:40.688 [2024-11-20 15:40:29.566362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.688 [2024-11-20 15:40:29.566393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.688 qpair failed and we were unable to recover it. 00:30:40.688 [2024-11-20 15:40:29.566613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.688 [2024-11-20 15:40:29.566642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.688 qpair failed and we were unable to recover it. 00:30:40.688 [2024-11-20 15:40:29.566963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.688 [2024-11-20 15:40:29.566996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.688 qpair failed and we were unable to recover it. 
00:30:40.688 [2024-11-20 15:40:29.567404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.688 [2024-11-20 15:40:29.567434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.688 qpair failed and we were unable to recover it. 00:30:40.688 [2024-11-20 15:40:29.567646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.688 [2024-11-20 15:40:29.567674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.688 qpair failed and we were unable to recover it. 00:30:40.688 [2024-11-20 15:40:29.568057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.688 [2024-11-20 15:40:29.568086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.688 qpair failed and we were unable to recover it. 00:30:40.688 [2024-11-20 15:40:29.568319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.688 [2024-11-20 15:40:29.568350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.688 qpair failed and we were unable to recover it. 00:30:40.688 [2024-11-20 15:40:29.568751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.688 [2024-11-20 15:40:29.568780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.688 qpair failed and we were unable to recover it. 00:30:40.689 [2024-11-20 15:40:29.569141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.689 [2024-11-20 15:40:29.569188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.689 qpair failed and we were unable to recover it. 00:30:40.689 [2024-11-20 15:40:29.569545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.689 [2024-11-20 15:40:29.569575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.689 qpair failed and we were unable to recover it. 00:30:40.689 [2024-11-20 15:40:29.569691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.689 [2024-11-20 15:40:29.569726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.689 qpair failed and we were unable to recover it. 00:30:40.689 [2024-11-20 15:40:29.569951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.689 [2024-11-20 15:40:29.569980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.689 qpair failed and we were unable to recover it. 00:30:40.689 [2024-11-20 15:40:29.570297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.689 [2024-11-20 15:40:29.570328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.689 qpair failed and we were unable to recover it. 
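[annotation, not log output] errno = 111 on Linux is ECONNREFUSED: each connect() to 10.0.0.2:4420 is being actively refused, which is expected while the target side of this disconnect test is down. A quick way to confirm the symbolic name on a test box, assuming python3 is available:

    python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
    # prints: ECONNREFUSED Connection refused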
00:30:40.689 [2024-11-20 15:40:29.570719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.689 [2024-11-20 15:40:29.570748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.689 qpair failed and we were unable to recover it. 00:30:40.689 [2024-11-20 15:40:29.571119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.689 [2024-11-20 15:40:29.571150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.689 qpair failed and we were unable to recover it. 00:30:40.689 [2024-11-20 15:40:29.571545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.689 [2024-11-20 15:40:29.571574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.689 qpair failed and we were unable to recover it. 00:30:40.689 [2024-11-20 15:40:29.571879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.689 [2024-11-20 15:40:29.571917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.689 qpair failed and we were unable to recover it. 00:30:40.689 [2024-11-20 15:40:29.572130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.689 [2024-11-20 15:40:29.572175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.689 qpair failed and we were unable to recover it. 00:30:40.689 15:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:40.689 [2024-11-20 15:40:29.572565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.689 [2024-11-20 15:40:29.572596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.689 qpair failed and we were unable to recover it. 00:30:40.689 15:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:40.689 [2024-11-20 15:40:29.572817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.689 [2024-11-20 15:40:29.572846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.689 qpair failed and we were unable to recover it. 00:30:40.689 15:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.689 [2024-11-20 15:40:29.573209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.689 [2024-11-20 15:40:29.573244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.689 qpair failed and we were unable to recover it. 
00:30:40.689 15:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:40.689 [2024-11-20 15:40:29.573471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.689 [2024-11-20 15:40:29.573502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.689 qpair failed and we were unable to recover it. 00:30:40.689 [2024-11-20 15:40:29.573717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.689 [2024-11-20 15:40:29.573745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.689 qpair failed and we were unable to recover it. 00:30:40.689 [2024-11-20 15:40:29.574108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.689 [2024-11-20 15:40:29.574139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.689 qpair failed and we were unable to recover it. 00:30:40.689 [2024-11-20 15:40:29.574479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.689 [2024-11-20 15:40:29.574509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.689 qpair failed and we were unable to recover it. 00:30:40.689 [2024-11-20 15:40:29.574842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.689 [2024-11-20 15:40:29.574871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.689 qpair failed and we were unable to recover it. 00:30:40.689 [2024-11-20 15:40:29.575253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.689 [2024-11-20 15:40:29.575283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.689 qpair failed and we were unable to recover it. 00:30:40.689 [2024-11-20 15:40:29.575631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.689 [2024-11-20 15:40:29.575660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.689 qpair failed and we were unable to recover it. 00:30:40.689 [2024-11-20 15:40:29.575870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.689 [2024-11-20 15:40:29.575899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.689 qpair failed and we were unable to recover it. 00:30:40.689 [2024-11-20 15:40:29.576259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.689 [2024-11-20 15:40:29.576289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420 00:30:40.689 qpair failed and we were unable to recover it. 
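[annotation, not log output] The trap line arms cleanup (process_shm, then nvmftestfini) on SIGINT/SIGTERM/EXIT so the target is torn down even if the test aborts, and rpc_cmd is the harness wrapper around SPDK's JSON-RPC client. Assuming a stock SPDK tree, the equivalent standalone call is roughly:

    # create a 64 MB RAM-backed bdev with 512-byte blocks, named Malloc0
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0

The reconnect errors keep printing around the trace lines because the qpair that failed earlier in the test is still retrying in the background.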
00:30:40.689 (failure sequence continues uninterrupted: 108 more repetitions for tqpair=0x7f8e88000b90, addr=10.0.0.2, port=4420, connect() timestamps 15:40:29.576647 through 15:40:29.615471; the elapsed-time prefix advances from 00:30:40.689 to 00:30:40.958 over this span)
00:30:40.958 Malloc0
00:30:40.958 15:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:40.958 15:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:30:40.958 15:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:40.958 15:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:40.958 [2024-11-20 15:40:29.617624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.958 [2024-11-20 15:40:29.617683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:40.958 qpair failed and we were unable to recover it.
[... 7 further identical connect() retries (errno = 111), 15:40:29.617943 through 15:40:29.620196 ...]
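The xtrace lines above show the target-side bring-up; rpc_cmd is the autotest_common.sh wrapper around SPDK's scripts/rpc.py. A minimal standalone sketch of the same sequence against an already-running nvmf_tgt (the Malloc0 geometry below is assumed for illustration, not taken from this run):

    # Target-side setup mirrored from the trace above (ramdisk geometry assumed)
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420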
00:30:40.958 [2024-11-20 15:40:29.620588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.958 [2024-11-20 15:40:29.620616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:40.958 qpair failed and we were unable to recover it.
[... 8 further identical connect() retries (errno = 111), 15:40:29.620835 through 15:40:29.623142 ...]
00:30:40.958 [2024-11-20 15:40:29.623312] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:40.958 [2024-11-20 15:40:29.623406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.958 [2024-11-20 15:40:29.623434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:40.958 qpair failed and we were unable to recover it.
[... 18 further identical connect() retries (errno = 111), 15:40:29.623775 through 15:40:29.629665 ...]
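The TCP Transport Init NOTICE above is the target acknowledging nvmf_create_transport. When reproducing by hand, the transport state can be read back over the same RPC channel (a sketch; assumes the default RPC socket):

    ./scripts/rpc.py nvmf_get_transports   # should now list a 'tcp' entry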
00:30:40.959 [2024-11-20 15:40:29.630059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.959 [2024-11-20 15:40:29.630090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:40.959 qpair failed and we were unable to recover it.
[... 6 further identical connect() retries (errno = 111), 15:40:29.630327 through 15:40:29.632178 ...]
00:30:40.959 15:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
[... one further connect() retry at 15:40:29.632597 ...]
00:30:40.959 [2024-11-20 15:40:29.632845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.959 15:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:40.959 [2024-11-20 15:40:29.632875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:40.959 qpair failed and we were unable to recover it.
00:30:40.959 15:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
[... one further connect() retry at 15:40:29.633249 ...]
00:30:40.959 15:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... 8 further identical connect() retries (errno = 111), 15:40:29.633626 through 15:40:29.636191 ...]
00:30:40.959 [2024-11-20 15:40:29.636599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.959 [2024-11-20 15:40:29.636629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:40.959 qpair failed and we were unable to recover it.
[... 19 further identical connect() retries (errno = 111), 15:40:29.636992 through 15:40:29.643487 ...]
00:30:40.960 [2024-11-20 15:40:29.643650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.960 [2024-11-20 15:40:29.643682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:40.960 qpair failed and we were unable to recover it.
[... 2 further identical connect() retries at 15:40:29.643845 and 15:40:29.644220 ...]
00:30:40.960 15:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
[... one further connect() retry at 15:40:29.644619 ...]
00:30:40.960 15:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
[... one further connect() retry at 15:40:29.645010 ...]
00:30:40.960 15:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:40.960 15:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... 3 further identical connect() retries (errno = 111), 15:40:29.645425 through 15:40:29.646269 ...]
00:30:40.960 [2024-11-20 15:40:29.646669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.960 [2024-11-20 15:40:29.646700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:40.960 qpair failed and we were unable to recover it.
[... 19 further identical connect() retries (errno = 111), 15:40:29.647070 through 15:40:29.653750 ...]
00:30:40.961 [2024-11-20 15:40:29.654155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.961 [2024-11-20 15:40:29.654195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:40.961 qpair failed and we were unable to recover it.
[... 6 further identical connect() retries (errno = 111), 15:40:29.654436 through 15:40:29.656314 ...]
00:30:40.961 15:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
[... one further connect() retry at 15:40:29.656721 ...]
00:30:40.961 15:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
[... one further connect() retry at 15:40:29.657074 ...]
00:30:40.961 15:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
[... one further connect() retry at 15:40:29.657511 ...]
00:30:40.961 15:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... 8 further identical connect() retries (errno = 111), 15:40:29.657815 through 15:40:29.660267 ...]
00:30:40.961 [2024-11-20 15:40:29.660677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.961 [2024-11-20 15:40:29.660708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8e88000b90 with addr=10.0.0.2, port=4420
00:30:40.961 qpair failed and we were unable to recover it.
[... 7 further identical connect() retries (errno = 111), 15:40:29.660984 through 15:40:29.663436 ...]
00:30:40.961 [2024-11-20 15:40:29.663729] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:40.961 15:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:40.961 15:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:30:40.961 15:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:40.961 15:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:40.961 [2024-11-20 15:40:29.674670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.961 [2024-11-20 15:40:29.674797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.961 [2024-11-20 15:40:29.674845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.961 [2024-11-20 15:40:29.674866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.961 [2024-11-20 15:40:29.674883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:40.962 [2024-11-20 15:40:29.674932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:40.962 qpair failed and we were unable to recover it.
00:30:40.962 15:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:40.962 15:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 793010
[... one further identical fabrics CONNECT failure block at 15:40:29.684508, ending "qpair failed and we were unable to recover it." ...]
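The failure mode changes here: once the listener NOTICE appears, the TCP connect itself succeeds, but the fabrics-level CONNECT for the I/O qpair is rejected. sct 1 marks a command-specific status type, and sc 130 (0x82) is the fabrics "Connect Invalid Parameters" code, consistent with the target-side "Unknown controller ID 0x1": after the forced disconnect, the host is still attaching I/O qpairs to a controller ID the restarted target no longer tracks. A hand probe of the same listener from the initiator node would look roughly like this (hypothetical, assumes nvme-cli is installed):

    nvme discover -t tcp -a 10.0.0.2 -s 4420                                # discovery listener added above
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1   # fresh admin + I/O connect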
[... 15 further identical fabrics CONNECT failure blocks (ctrlr.c "Unknown controller ID 0x1" -> nvme_fabric.c "Connect command failed, rc -5 ... sct 1, sc 130" -> nvme_qpair.c "CQ transport error -6 (No such device or address) on qpair id 4"), roughly one every 10 ms, 15:40:29.694514 through 15:40:29.834784, each ending "qpair failed and we were unable to recover it." ...]
00:30:40.963 [2024-11-20 15:40:29.844815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.963 [2024-11-20 15:40:29.844886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.963 [2024-11-20 15:40:29.844922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.963 [2024-11-20 15:40:29.844932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.963 [2024-11-20 15:40:29.844940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:40.963 [2024-11-20 15:40:29.844968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.963 qpair failed and we were unable to recover it. 00:30:40.963 [2024-11-20 15:40:29.854878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.963 [2024-11-20 15:40:29.854980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.963 [2024-11-20 15:40:29.855019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.963 [2024-11-20 15:40:29.855028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.963 [2024-11-20 15:40:29.855036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:40.963 [2024-11-20 15:40:29.855062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.963 qpair failed and we were unable to recover it. 00:30:40.963 [2024-11-20 15:40:29.864970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.963 [2024-11-20 15:40:29.865054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.963 [2024-11-20 15:40:29.865074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.963 [2024-11-20 15:40:29.865083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.963 [2024-11-20 15:40:29.865090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:40.963 [2024-11-20 15:40:29.865109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.963 qpair failed and we were unable to recover it. 
00:30:40.963 [2024-11-20 15:40:29.874991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.963 [2024-11-20 15:40:29.875073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.963 [2024-11-20 15:40:29.875099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.963 [2024-11-20 15:40:29.875107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.963 [2024-11-20 15:40:29.875115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:40.963 [2024-11-20 15:40:29.875134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.963 qpair failed and we were unable to recover it. 00:30:40.963 [2024-11-20 15:40:29.884952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.963 [2024-11-20 15:40:29.885015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.963 [2024-11-20 15:40:29.885033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.963 [2024-11-20 15:40:29.885041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.963 [2024-11-20 15:40:29.885048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:40.963 [2024-11-20 15:40:29.885067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.963 qpair failed and we were unable to recover it. 00:30:40.963 [2024-11-20 15:40:29.895021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.963 [2024-11-20 15:40:29.895085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.963 [2024-11-20 15:40:29.895103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.963 [2024-11-20 15:40:29.895111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.963 [2024-11-20 15:40:29.895118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:40.963 [2024-11-20 15:40:29.895137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.963 qpair failed and we were unable to recover it. 
00:30:40.963 [2024-11-20 15:40:29.904992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.963 [2024-11-20 15:40:29.905069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.963 [2024-11-20 15:40:29.905089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.963 [2024-11-20 15:40:29.905102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.963 [2024-11-20 15:40:29.905114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:40.963 [2024-11-20 15:40:29.905137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.963 qpair failed and we were unable to recover it. 00:30:41.227 [2024-11-20 15:40:29.915064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.227 [2024-11-20 15:40:29.915138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.227 [2024-11-20 15:40:29.915157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.227 [2024-11-20 15:40:29.915173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.227 [2024-11-20 15:40:29.915188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.227 [2024-11-20 15:40:29.915207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.227 qpair failed and we were unable to recover it. 00:30:41.227 [2024-11-20 15:40:29.924949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.227 [2024-11-20 15:40:29.925028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.227 [2024-11-20 15:40:29.925052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.227 [2024-11-20 15:40:29.925060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.227 [2024-11-20 15:40:29.925067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.227 [2024-11-20 15:40:29.925088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.227 qpair failed and we were unable to recover it. 
00:30:41.227 [2024-11-20 15:40:29.935085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.227 [2024-11-20 15:40:29.935166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.227 [2024-11-20 15:40:29.935187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.227 [2024-11-20 15:40:29.935195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.227 [2024-11-20 15:40:29.935205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.227 [2024-11-20 15:40:29.935225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.227 qpair failed and we were unable to recover it. 00:30:41.227 [2024-11-20 15:40:29.945147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.227 [2024-11-20 15:40:29.945233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.227 [2024-11-20 15:40:29.945252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.227 [2024-11-20 15:40:29.945260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.227 [2024-11-20 15:40:29.945267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.227 [2024-11-20 15:40:29.945287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.227 qpair failed and we were unable to recover it. 00:30:41.227 [2024-11-20 15:40:29.955181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.227 [2024-11-20 15:40:29.955248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.227 [2024-11-20 15:40:29.955265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.227 [2024-11-20 15:40:29.955272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.227 [2024-11-20 15:40:29.955280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.227 [2024-11-20 15:40:29.955299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.227 qpair failed and we were unable to recover it. 
00:30:41.227 [2024-11-20 15:40:29.965069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.227 [2024-11-20 15:40:29.965134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.227 [2024-11-20 15:40:29.965151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.227 [2024-11-20 15:40:29.965165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.227 [2024-11-20 15:40:29.965173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.227 [2024-11-20 15:40:29.965191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.227 qpair failed and we were unable to recover it. 00:30:41.227 [2024-11-20 15:40:29.975209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.227 [2024-11-20 15:40:29.975278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.227 [2024-11-20 15:40:29.975296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.227 [2024-11-20 15:40:29.975304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.227 [2024-11-20 15:40:29.975310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.227 [2024-11-20 15:40:29.975328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.227 qpair failed and we were unable to recover it. 00:30:41.227 [2024-11-20 15:40:29.985239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.227 [2024-11-20 15:40:29.985304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.227 [2024-11-20 15:40:29.985321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.227 [2024-11-20 15:40:29.985329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.227 [2024-11-20 15:40:29.985336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.227 [2024-11-20 15:40:29.985353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.227 qpair failed and we were unable to recover it. 
00:30:41.227 [2024-11-20 15:40:29.995305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.227 [2024-11-20 15:40:29.995383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.227 [2024-11-20 15:40:29.995401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.227 [2024-11-20 15:40:29.995409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.227 [2024-11-20 15:40:29.995417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.227 [2024-11-20 15:40:29.995436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.227 qpair failed and we were unable to recover it. 00:30:41.227 [2024-11-20 15:40:30.005330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.227 [2024-11-20 15:40:30.005399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.227 [2024-11-20 15:40:30.005427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.227 [2024-11-20 15:40:30.005435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.227 [2024-11-20 15:40:30.005442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.227 [2024-11-20 15:40:30.005462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.227 qpair failed and we were unable to recover it. 00:30:41.227 [2024-11-20 15:40:30.015261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.227 [2024-11-20 15:40:30.015324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.227 [2024-11-20 15:40:30.015343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.227 [2024-11-20 15:40:30.015352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.227 [2024-11-20 15:40:30.015359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.227 [2024-11-20 15:40:30.015377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.227 qpair failed and we were unable to recover it. 
00:30:41.227 [2024-11-20 15:40:30.025398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.227 [2024-11-20 15:40:30.025472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.227 [2024-11-20 15:40:30.025490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.227 [2024-11-20 15:40:30.025498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.227 [2024-11-20 15:40:30.025505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.227 [2024-11-20 15:40:30.025524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.228 qpair failed and we were unable to recover it. 00:30:41.228 [2024-11-20 15:40:30.035471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.228 [2024-11-20 15:40:30.035555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.228 [2024-11-20 15:40:30.035573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.228 [2024-11-20 15:40:30.035581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.228 [2024-11-20 15:40:30.035588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.228 [2024-11-20 15:40:30.035608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.228 qpair failed and we were unable to recover it. 00:30:41.228 [2024-11-20 15:40:30.045472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.228 [2024-11-20 15:40:30.045539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.228 [2024-11-20 15:40:30.045558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.228 [2024-11-20 15:40:30.045566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.228 [2024-11-20 15:40:30.045580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.228 [2024-11-20 15:40:30.045599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.228 qpair failed and we were unable to recover it. 
00:30:41.228 [2024-11-20 15:40:30.055636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.228 [2024-11-20 15:40:30.055730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.228 [2024-11-20 15:40:30.055748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.228 [2024-11-20 15:40:30.055758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.228 [2024-11-20 15:40:30.055765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.228 [2024-11-20 15:40:30.055783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.228 qpair failed and we were unable to recover it. 00:30:41.228 [2024-11-20 15:40:30.065446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.228 [2024-11-20 15:40:30.065557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.228 [2024-11-20 15:40:30.065580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.228 [2024-11-20 15:40:30.065588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.228 [2024-11-20 15:40:30.065596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.228 [2024-11-20 15:40:30.065617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.228 qpair failed and we were unable to recover it. 00:30:41.228 [2024-11-20 15:40:30.075576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.228 [2024-11-20 15:40:30.075696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.228 [2024-11-20 15:40:30.075717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.228 [2024-11-20 15:40:30.075725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.228 [2024-11-20 15:40:30.075732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.228 [2024-11-20 15:40:30.075751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.228 qpair failed and we were unable to recover it. 
00:30:41.228 [2024-11-20 15:40:30.085575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.228 [2024-11-20 15:40:30.085648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.228 [2024-11-20 15:40:30.085667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.228 [2024-11-20 15:40:30.085676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.228 [2024-11-20 15:40:30.085683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.228 [2024-11-20 15:40:30.085703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.228 qpair failed and we were unable to recover it. 00:30:41.228 [2024-11-20 15:40:30.095595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.228 [2024-11-20 15:40:30.095656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.228 [2024-11-20 15:40:30.095675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.228 [2024-11-20 15:40:30.095683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.228 [2024-11-20 15:40:30.095690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.228 [2024-11-20 15:40:30.095709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.228 qpair failed and we were unable to recover it. 00:30:41.228 [2024-11-20 15:40:30.105639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.228 [2024-11-20 15:40:30.105710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.228 [2024-11-20 15:40:30.105728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.228 [2024-11-20 15:40:30.105736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.228 [2024-11-20 15:40:30.105743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.228 [2024-11-20 15:40:30.105761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.228 qpair failed and we were unable to recover it. 
00:30:41.228 [2024-11-20 15:40:30.115725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.228 [2024-11-20 15:40:30.115793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.228 [2024-11-20 15:40:30.115811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.228 [2024-11-20 15:40:30.115819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.228 [2024-11-20 15:40:30.115826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.228 [2024-11-20 15:40:30.115845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.228 qpair failed and we were unable to recover it. 00:30:41.228 [2024-11-20 15:40:30.125688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.228 [2024-11-20 15:40:30.125780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.228 [2024-11-20 15:40:30.125799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.228 [2024-11-20 15:40:30.125806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.228 [2024-11-20 15:40:30.125813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.228 [2024-11-20 15:40:30.125833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.228 qpair failed and we were unable to recover it. 00:30:41.228 [2024-11-20 15:40:30.135768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.228 [2024-11-20 15:40:30.135850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.228 [2024-11-20 15:40:30.135875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.228 [2024-11-20 15:40:30.135883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.228 [2024-11-20 15:40:30.135892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.228 [2024-11-20 15:40:30.135911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.228 qpair failed and we were unable to recover it. 
00:30:41.228 [2024-11-20 15:40:30.145784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.228 [2024-11-20 15:40:30.145905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.228 [2024-11-20 15:40:30.145924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.228 [2024-11-20 15:40:30.145934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.228 [2024-11-20 15:40:30.145942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.228 [2024-11-20 15:40:30.145960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.228 qpair failed and we were unable to recover it. 00:30:41.228 [2024-11-20 15:40:30.155849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.228 [2024-11-20 15:40:30.155925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.228 [2024-11-20 15:40:30.155943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.228 [2024-11-20 15:40:30.155952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.228 [2024-11-20 15:40:30.155960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.228 [2024-11-20 15:40:30.155980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.228 qpair failed and we were unable to recover it. 00:30:41.229 [2024-11-20 15:40:30.165829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.229 [2024-11-20 15:40:30.165903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.229 [2024-11-20 15:40:30.165921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.229 [2024-11-20 15:40:30.165930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.229 [2024-11-20 15:40:30.165938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.229 [2024-11-20 15:40:30.165958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.229 qpair failed and we were unable to recover it. 
00:30:41.229 [2024-11-20 15:40:30.175864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.229 [2024-11-20 15:40:30.175936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.229 [2024-11-20 15:40:30.175955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.229 [2024-11-20 15:40:30.175968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.229 [2024-11-20 15:40:30.175976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.229 [2024-11-20 15:40:30.175995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.229 qpair failed and we were unable to recover it. 00:30:41.492 [2024-11-20 15:40:30.185887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.492 [2024-11-20 15:40:30.185953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.492 [2024-11-20 15:40:30.185971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.492 [2024-11-20 15:40:30.185979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.492 [2024-11-20 15:40:30.185986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.492 [2024-11-20 15:40:30.186005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.492 qpair failed and we were unable to recover it. 00:30:41.492 [2024-11-20 15:40:30.195945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.492 [2024-11-20 15:40:30.196056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.492 [2024-11-20 15:40:30.196075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.492 [2024-11-20 15:40:30.196085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.492 [2024-11-20 15:40:30.196092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.492 [2024-11-20 15:40:30.196111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.492 qpair failed and we were unable to recover it. 
00:30:41.492 [2024-11-20 15:40:30.205974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.492 [2024-11-20 15:40:30.206042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.492 [2024-11-20 15:40:30.206060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.492 [2024-11-20 15:40:30.206068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.492 [2024-11-20 15:40:30.206075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.492 [2024-11-20 15:40:30.206093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.492 qpair failed and we were unable to recover it. 00:30:41.492 [2024-11-20 15:40:30.216006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.492 [2024-11-20 15:40:30.216081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.492 [2024-11-20 15:40:30.216098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.492 [2024-11-20 15:40:30.216106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.492 [2024-11-20 15:40:30.216113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.492 [2024-11-20 15:40:30.216139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.492 qpair failed and we were unable to recover it. 00:30:41.492 [2024-11-20 15:40:30.226031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.492 [2024-11-20 15:40:30.226099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.492 [2024-11-20 15:40:30.226117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.492 [2024-11-20 15:40:30.226125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.492 [2024-11-20 15:40:30.226132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.492 [2024-11-20 15:40:30.226150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.492 qpair failed and we were unable to recover it. 
00:30:41.492 [2024-11-20 15:40:30.236095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.492 [2024-11-20 15:40:30.236182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.492 [2024-11-20 15:40:30.236199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.492 [2024-11-20 15:40:30.236208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.492 [2024-11-20 15:40:30.236215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.492 [2024-11-20 15:40:30.236236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.492 qpair failed and we were unable to recover it. 00:30:41.492 [2024-11-20 15:40:30.246089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.492 [2024-11-20 15:40:30.246186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.492 [2024-11-20 15:40:30.246204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.492 [2024-11-20 15:40:30.246214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.492 [2024-11-20 15:40:30.246221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.492 [2024-11-20 15:40:30.246240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.492 qpair failed and we were unable to recover it. 00:30:41.492 [2024-11-20 15:40:30.256090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.492 [2024-11-20 15:40:30.256163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.492 [2024-11-20 15:40:30.256180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.492 [2024-11-20 15:40:30.256188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.492 [2024-11-20 15:40:30.256195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.492 [2024-11-20 15:40:30.256214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.492 qpair failed and we were unable to recover it. 
00:30:41.492 [2024-11-20 15:40:30.266153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.492 [2024-11-20 15:40:30.266233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.492 [2024-11-20 15:40:30.266252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.492 [2024-11-20 15:40:30.266260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.492 [2024-11-20 15:40:30.266267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.492 [2024-11-20 15:40:30.266285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.492 qpair failed and we were unable to recover it. 00:30:41.492 [2024-11-20 15:40:30.276222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.492 [2024-11-20 15:40:30.276297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.492 [2024-11-20 15:40:30.276314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.493 [2024-11-20 15:40:30.276322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.493 [2024-11-20 15:40:30.276329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.493 [2024-11-20 15:40:30.276348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.493 qpair failed and we were unable to recover it. 00:30:41.493 [2024-11-20 15:40:30.286202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.493 [2024-11-20 15:40:30.286265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.493 [2024-11-20 15:40:30.286283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.493 [2024-11-20 15:40:30.286293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.493 [2024-11-20 15:40:30.286300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.493 [2024-11-20 15:40:30.286320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.493 qpair failed and we were unable to recover it. 
00:30:41.493 [2024-11-20 15:40:30.296221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.493 [2024-11-20 15:40:30.296290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.493 [2024-11-20 15:40:30.296308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.493 [2024-11-20 15:40:30.296316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.493 [2024-11-20 15:40:30.296323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.493 [2024-11-20 15:40:30.296342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.493 qpair failed and we were unable to recover it. 00:30:41.493 [2024-11-20 15:40:30.306253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.493 [2024-11-20 15:40:30.306321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.493 [2024-11-20 15:40:30.306339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.493 [2024-11-20 15:40:30.306352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.493 [2024-11-20 15:40:30.306360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.493 [2024-11-20 15:40:30.306378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.493 qpair failed and we were unable to recover it. 00:30:41.493 [2024-11-20 15:40:30.316336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.493 [2024-11-20 15:40:30.316411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.493 [2024-11-20 15:40:30.316430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.493 [2024-11-20 15:40:30.316438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.493 [2024-11-20 15:40:30.316445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.493 [2024-11-20 15:40:30.316465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.493 qpair failed and we were unable to recover it. 
00:30:41.493 [2024-11-20 15:40:30.326316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.493 [2024-11-20 15:40:30.326381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.493 [2024-11-20 15:40:30.326398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.493 [2024-11-20 15:40:30.326406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.493 [2024-11-20 15:40:30.326413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.493 [2024-11-20 15:40:30.326431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.493 qpair failed and we were unable to recover it. 00:30:41.493 [2024-11-20 15:40:30.336321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.493 [2024-11-20 15:40:30.336418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.493 [2024-11-20 15:40:30.336437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.493 [2024-11-20 15:40:30.336445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.493 [2024-11-20 15:40:30.336454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.493 [2024-11-20 15:40:30.336473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.493 qpair failed and we were unable to recover it. 00:30:41.493 [2024-11-20 15:40:30.346378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.493 [2024-11-20 15:40:30.346451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.493 [2024-11-20 15:40:30.346469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.493 [2024-11-20 15:40:30.346478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.493 [2024-11-20 15:40:30.346485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:41.493 [2024-11-20 15:40:30.346510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:41.493 qpair failed and we were unable to recover it. 
00:30:41.493 [2024-11-20 15:40:30.356439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.493 [2024-11-20 15:40:30.356515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.493 [2024-11-20 15:40:30.356532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.493 [2024-11-20 15:40:30.356539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.493 [2024-11-20 15:40:30.356546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:41.493 [2024-11-20 15:40:30.356565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.493 qpair failed and we were unable to recover it.
00:30:41.493 [2024-11-20 15:40:30.366415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.493 [2024-11-20 15:40:30.366513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.493 [2024-11-20 15:40:30.366531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.493 [2024-11-20 15:40:30.366539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.493 [2024-11-20 15:40:30.366546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:41.493 [2024-11-20 15:40:30.366564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.493 qpair failed and we were unable to recover it.
00:30:41.493 [2024-11-20 15:40:30.376379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.493 [2024-11-20 15:40:30.376454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.493 [2024-11-20 15:40:30.376476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.493 [2024-11-20 15:40:30.376484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.493 [2024-11-20 15:40:30.376493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:41.493 [2024-11-20 15:40:30.376512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.493 qpair failed and we were unable to recover it.
00:30:41.493 [2024-11-20 15:40:30.386500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.493 [2024-11-20 15:40:30.386584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.493 [2024-11-20 15:40:30.386603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.493 [2024-11-20 15:40:30.386612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.493 [2024-11-20 15:40:30.386620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:41.493 [2024-11-20 15:40:30.386638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.493 qpair failed and we were unable to recover it.
00:30:41.493 [2024-11-20 15:40:30.396606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.493 [2024-11-20 15:40:30.396680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.493 [2024-11-20 15:40:30.396699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.493 [2024-11-20 15:40:30.396707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.493 [2024-11-20 15:40:30.396715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:41.493 [2024-11-20 15:40:30.396733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.493 qpair failed and we were unable to recover it.
00:30:41.493 [2024-11-20 15:40:30.406517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.493 [2024-11-20 15:40:30.406578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.493 [2024-11-20 15:40:30.406596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.494 [2024-11-20 15:40:30.406605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.494 [2024-11-20 15:40:30.406613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:41.494 [2024-11-20 15:40:30.406631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.494 qpair failed and we were unable to recover it.
00:30:41.494 [2024-11-20 15:40:30.416614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.494 [2024-11-20 15:40:30.416676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.494 [2024-11-20 15:40:30.416694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.494 [2024-11-20 15:40:30.416702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.494 [2024-11-20 15:40:30.416709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:41.494 [2024-11-20 15:40:30.416728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.494 qpair failed and we were unable to recover it.
00:30:41.494 [2024-11-20 15:40:30.426606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.494 [2024-11-20 15:40:30.426709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.494 [2024-11-20 15:40:30.426728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.494 [2024-11-20 15:40:30.426736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.494 [2024-11-20 15:40:30.426743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:41.494 [2024-11-20 15:40:30.426763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.494 qpair failed and we were unable to recover it.
00:30:41.494 [2024-11-20 15:40:30.436681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.494 [2024-11-20 15:40:30.436769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.494 [2024-11-20 15:40:30.436793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.494 [2024-11-20 15:40:30.436801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.494 [2024-11-20 15:40:30.436809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:41.494 [2024-11-20 15:40:30.436828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.494 qpair failed and we were unable to recover it.
00:30:41.494 [2024-11-20 15:40:30.446662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.494 [2024-11-20 15:40:30.446731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.494 [2024-11-20 15:40:30.446749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.494 [2024-11-20 15:40:30.446757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.494 [2024-11-20 15:40:30.446765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:41.494 [2024-11-20 15:40:30.446783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.494 qpair failed and we were unable to recover it.
00:30:41.757 [2024-11-20 15:40:30.456585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.757 [2024-11-20 15:40:30.456656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.757 [2024-11-20 15:40:30.456674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.757 [2024-11-20 15:40:30.456682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.757 [2024-11-20 15:40:30.456690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:41.757 [2024-11-20 15:40:30.456708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.757 qpair failed and we were unable to recover it.
00:30:41.758 [2024-11-20 15:40:30.466711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.758 [2024-11-20 15:40:30.466783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.758 [2024-11-20 15:40:30.466805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.758 [2024-11-20 15:40:30.466815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.758 [2024-11-20 15:40:30.466825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:41.758 [2024-11-20 15:40:30.466845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.758 qpair failed and we were unable to recover it.
00:30:41.758 [2024-11-20 15:40:30.476829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.758 [2024-11-20 15:40:30.476898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.758 [2024-11-20 15:40:30.476917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.758 [2024-11-20 15:40:30.476925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.758 [2024-11-20 15:40:30.476938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:41.758 [2024-11-20 15:40:30.476957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.758 qpair failed and we were unable to recover it.
00:30:41.758 [2024-11-20 15:40:30.486838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.758 [2024-11-20 15:40:30.486901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.758 [2024-11-20 15:40:30.486919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.758 [2024-11-20 15:40:30.486927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.758 [2024-11-20 15:40:30.486935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:41.758 [2024-11-20 15:40:30.486953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.758 qpair failed and we were unable to recover it.
00:30:41.758 [2024-11-20 15:40:30.496840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.758 [2024-11-20 15:40:30.496905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.758 [2024-11-20 15:40:30.496923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.758 [2024-11-20 15:40:30.496931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.758 [2024-11-20 15:40:30.496938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:41.758 [2024-11-20 15:40:30.496956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.758 qpair failed and we were unable to recover it.
00:30:41.758 [2024-11-20 15:40:30.506870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.758 [2024-11-20 15:40:30.506936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.758 [2024-11-20 15:40:30.506954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.758 [2024-11-20 15:40:30.506962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.758 [2024-11-20 15:40:30.506969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:41.758 [2024-11-20 15:40:30.506987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.758 qpair failed and we were unable to recover it.
00:30:41.758 [2024-11-20 15:40:30.516930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.758 [2024-11-20 15:40:30.517002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.758 [2024-11-20 15:40:30.517019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.758 [2024-11-20 15:40:30.517027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.758 [2024-11-20 15:40:30.517034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:41.758 [2024-11-20 15:40:30.517053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.758 qpair failed and we were unable to recover it.
00:30:41.758 [2024-11-20 15:40:30.526805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.758 [2024-11-20 15:40:30.526867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.758 [2024-11-20 15:40:30.526890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.758 [2024-11-20 15:40:30.526898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.758 [2024-11-20 15:40:30.526906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:41.758 [2024-11-20 15:40:30.526926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.758 qpair failed and we were unable to recover it.
00:30:41.758 [2024-11-20 15:40:30.536952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.758 [2024-11-20 15:40:30.537018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.758 [2024-11-20 15:40:30.537037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.758 [2024-11-20 15:40:30.537045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.758 [2024-11-20 15:40:30.537053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:41.758 [2024-11-20 15:40:30.537071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.758 qpair failed and we were unable to recover it.
00:30:41.758 [2024-11-20 15:40:30.546982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.758 [2024-11-20 15:40:30.547051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.758 [2024-11-20 15:40:30.547068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.758 [2024-11-20 15:40:30.547077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.758 [2024-11-20 15:40:30.547086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:41.758 [2024-11-20 15:40:30.547105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.758 qpair failed and we were unable to recover it.
00:30:41.758 [2024-11-20 15:40:30.557032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.758 [2024-11-20 15:40:30.557111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.758 [2024-11-20 15:40:30.557129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.758 [2024-11-20 15:40:30.557137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.758 [2024-11-20 15:40:30.557144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:41.758 [2024-11-20 15:40:30.557172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.758 qpair failed and we were unable to recover it.
00:30:41.758 [2024-11-20 15:40:30.567012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.758 [2024-11-20 15:40:30.567127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.758 [2024-11-20 15:40:30.567152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.758 [2024-11-20 15:40:30.567165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.758 [2024-11-20 15:40:30.567173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:41.758 [2024-11-20 15:40:30.567192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.758 qpair failed and we were unable to recover it.
00:30:41.758 [2024-11-20 15:40:30.577061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.758 [2024-11-20 15:40:30.577136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.758 [2024-11-20 15:40:30.577154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.758 [2024-11-20 15:40:30.577170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.758 [2024-11-20 15:40:30.577177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:41.758 [2024-11-20 15:40:30.577195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.758 qpair failed and we were unable to recover it.
00:30:41.758 [2024-11-20 15:40:30.587097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.758 [2024-11-20 15:40:30.587200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.758 [2024-11-20 15:40:30.587219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.758 [2024-11-20 15:40:30.587228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.758 [2024-11-20 15:40:30.587236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:41.759 [2024-11-20 15:40:30.587253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.759 qpair failed and we were unable to recover it.
00:30:41.759 [2024-11-20 15:40:30.597123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.759 [2024-11-20 15:40:30.597242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.759 [2024-11-20 15:40:30.597261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.759 [2024-11-20 15:40:30.597269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.759 [2024-11-20 15:40:30.597277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:41.759 [2024-11-20 15:40:30.597295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.759 qpair failed and we were unable to recover it.
00:30:41.759 [2024-11-20 15:40:30.607142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.759 [2024-11-20 15:40:30.607212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.759 [2024-11-20 15:40:30.607229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.759 [2024-11-20 15:40:30.607237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.759 [2024-11-20 15:40:30.607249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:41.759 [2024-11-20 15:40:30.607268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.759 qpair failed and we were unable to recover it.
00:30:41.759 [2024-11-20 15:40:30.617208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.759 [2024-11-20 15:40:30.617279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.759 [2024-11-20 15:40:30.617297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.759 [2024-11-20 15:40:30.617305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.759 [2024-11-20 15:40:30.617312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:41.759 [2024-11-20 15:40:30.617330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.759 qpair failed and we were unable to recover it.
00:30:41.759 [2024-11-20 15:40:30.627241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.759 [2024-11-20 15:40:30.627319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.759 [2024-11-20 15:40:30.627337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.759 [2024-11-20 15:40:30.627345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.759 [2024-11-20 15:40:30.627352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:41.759 [2024-11-20 15:40:30.627369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.759 qpair failed and we were unable to recover it.
00:30:41.759 [2024-11-20 15:40:30.637343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.759 [2024-11-20 15:40:30.637420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.759 [2024-11-20 15:40:30.637438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.759 [2024-11-20 15:40:30.637447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.759 [2024-11-20 15:40:30.637454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:41.759 [2024-11-20 15:40:30.637472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.759 qpair failed and we were unable to recover it.
00:30:41.759 [2024-11-20 15:40:30.647252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.759 [2024-11-20 15:40:30.647354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.759 [2024-11-20 15:40:30.647372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.759 [2024-11-20 15:40:30.647380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.759 [2024-11-20 15:40:30.647387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:41.759 [2024-11-20 15:40:30.647407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.759 qpair failed and we were unable to recover it.
00:30:41.759 [2024-11-20 15:40:30.657316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.759 [2024-11-20 15:40:30.657382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.759 [2024-11-20 15:40:30.657400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.759 [2024-11-20 15:40:30.657408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.759 [2024-11-20 15:40:30.657415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:41.759 [2024-11-20 15:40:30.657433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.759 qpair failed and we were unable to recover it.
00:30:41.759 [2024-11-20 15:40:30.667319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.759 [2024-11-20 15:40:30.667388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.759 [2024-11-20 15:40:30.667405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.759 [2024-11-20 15:40:30.667413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.759 [2024-11-20 15:40:30.667420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:41.759 [2024-11-20 15:40:30.667437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.759 qpair failed and we were unable to recover it.
00:30:41.759 [2024-11-20 15:40:30.677338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.759 [2024-11-20 15:40:30.677416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.759 [2024-11-20 15:40:30.677433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.759 [2024-11-20 15:40:30.677441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.759 [2024-11-20 15:40:30.677448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:41.759 [2024-11-20 15:40:30.677466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.759 qpair failed and we were unable to recover it.
00:30:41.759 [2024-11-20 15:40:30.687417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.759 [2024-11-20 15:40:30.687480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.759 [2024-11-20 15:40:30.687498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.759 [2024-11-20 15:40:30.687505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.759 [2024-11-20 15:40:30.687513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:41.759 [2024-11-20 15:40:30.687532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.759 qpair failed and we were unable to recover it.
00:30:41.759 [2024-11-20 15:40:30.697450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.759 [2024-11-20 15:40:30.697513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.759 [2024-11-20 15:40:30.697536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.759 [2024-11-20 15:40:30.697544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.759 [2024-11-20 15:40:30.697551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:41.759 [2024-11-20 15:40:30.697569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.759 qpair failed and we were unable to recover it.
00:30:41.759 [2024-11-20 15:40:30.707490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.759 [2024-11-20 15:40:30.707564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.759 [2024-11-20 15:40:30.707581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.759 [2024-11-20 15:40:30.707589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.759 [2024-11-20 15:40:30.707597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:41.759 [2024-11-20 15:40:30.707615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:41.759 qpair failed and we were unable to recover it.
00:30:42.022 [2024-11-20 15:40:30.717563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.022 [2024-11-20 15:40:30.717670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.022 [2024-11-20 15:40:30.717688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.022 [2024-11-20 15:40:30.717697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.022 [2024-11-20 15:40:30.717704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.022 [2024-11-20 15:40:30.717722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.022 qpair failed and we were unable to recover it.
00:30:42.022 [2024-11-20 15:40:30.727522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.022 [2024-11-20 15:40:30.727586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.022 [2024-11-20 15:40:30.727604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.022 [2024-11-20 15:40:30.727612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.022 [2024-11-20 15:40:30.727620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.022 [2024-11-20 15:40:30.727638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.022 qpair failed and we were unable to recover it.
00:30:42.022 [2024-11-20 15:40:30.737550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.022 [2024-11-20 15:40:30.737625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.022 [2024-11-20 15:40:30.737644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.022 [2024-11-20 15:40:30.737657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.022 [2024-11-20 15:40:30.737665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.022 [2024-11-20 15:40:30.737683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.022 qpair failed and we were unable to recover it.
00:30:42.022 [2024-11-20 15:40:30.747622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.022 [2024-11-20 15:40:30.747693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.022 [2024-11-20 15:40:30.747710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.022 [2024-11-20 15:40:30.747717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.022 [2024-11-20 15:40:30.747725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.022 [2024-11-20 15:40:30.747742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.022 qpair failed and we were unable to recover it.
00:30:42.022 [2024-11-20 15:40:30.757645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.022 [2024-11-20 15:40:30.757765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.022 [2024-11-20 15:40:30.757784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.022 [2024-11-20 15:40:30.757792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.022 [2024-11-20 15:40:30.757799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.022 [2024-11-20 15:40:30.757818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.022 qpair failed and we were unable to recover it.
00:30:42.022 [2024-11-20 15:40:30.767667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.022 [2024-11-20 15:40:30.767775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.022 [2024-11-20 15:40:30.767795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.022 [2024-11-20 15:40:30.767802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.022 [2024-11-20 15:40:30.767810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.022 [2024-11-20 15:40:30.767828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.022 qpair failed and we were unable to recover it.
00:30:42.022 [2024-11-20 15:40:30.777686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.022 [2024-11-20 15:40:30.777757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.022 [2024-11-20 15:40:30.777774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.022 [2024-11-20 15:40:30.777782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.023 [2024-11-20 15:40:30.777789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.023 [2024-11-20 15:40:30.777812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.023 qpair failed and we were unable to recover it.
00:30:42.023 [2024-11-20 15:40:30.787698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.023 [2024-11-20 15:40:30.787766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.023 [2024-11-20 15:40:30.787785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.023 [2024-11-20 15:40:30.787793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.023 [2024-11-20 15:40:30.787800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.023 [2024-11-20 15:40:30.787818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.023 qpair failed and we were unable to recover it.
00:30:42.023 [2024-11-20 15:40:30.797767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.023 [2024-11-20 15:40:30.797837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.023 [2024-11-20 15:40:30.797855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.023 [2024-11-20 15:40:30.797863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.023 [2024-11-20 15:40:30.797870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.023 [2024-11-20 15:40:30.797888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.023 qpair failed and we were unable to recover it.
00:30:42.023 [2024-11-20 15:40:30.807743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.023 [2024-11-20 15:40:30.807837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.023 [2024-11-20 15:40:30.807855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.023 [2024-11-20 15:40:30.807865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.023 [2024-11-20 15:40:30.807872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.023 [2024-11-20 15:40:30.807889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.023 qpair failed and we were unable to recover it.
00:30:42.023 [2024-11-20 15:40:30.817747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.023 [2024-11-20 15:40:30.817817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.023 [2024-11-20 15:40:30.817854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.023 [2024-11-20 15:40:30.817866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.023 [2024-11-20 15:40:30.817874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.023 [2024-11-20 15:40:30.817900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.023 qpair failed and we were unable to recover it.
00:30:42.023 [2024-11-20 15:40:30.827821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.023 [2024-11-20 15:40:30.827904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.023 [2024-11-20 15:40:30.827942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.023 [2024-11-20 15:40:30.827954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.023 [2024-11-20 15:40:30.827961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.023 [2024-11-20 15:40:30.827989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.023 qpair failed and we were unable to recover it.
00:30:42.023 [2024-11-20 15:40:30.837880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.023 [2024-11-20 15:40:30.837957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.023 [2024-11-20 15:40:30.837994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.023 [2024-11-20 15:40:30.838006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.023 [2024-11-20 15:40:30.838014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.023 [2024-11-20 15:40:30.838040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.023 qpair failed and we were unable to recover it.
00:30:42.023 [2024-11-20 15:40:30.847878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.023 [2024-11-20 15:40:30.847945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.023 [2024-11-20 15:40:30.847966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.023 [2024-11-20 15:40:30.847975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.023 [2024-11-20 15:40:30.847983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.023 [2024-11-20 15:40:30.848004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.023 qpair failed and we were unable to recover it.
00:30:42.023 [2024-11-20 15:40:30.857893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.023 [2024-11-20 15:40:30.857957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.023 [2024-11-20 15:40:30.857976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.023 [2024-11-20 15:40:30.857985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.023 [2024-11-20 15:40:30.857993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.023 [2024-11-20 15:40:30.858012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.023 qpair failed and we were unable to recover it.
00:30:42.023 [2024-11-20 15:40:30.867926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.023 [2024-11-20 15:40:30.868007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.023 [2024-11-20 15:40:30.868025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.023 [2024-11-20 15:40:30.868040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.023 [2024-11-20 15:40:30.868050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.023 [2024-11-20 15:40:30.868069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.023 qpair failed and we were unable to recover it.
00:30:42.023 [2024-11-20 15:40:30.878004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.023 [2024-11-20 15:40:30.878071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.023 [2024-11-20 15:40:30.878089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.023 [2024-11-20 15:40:30.878097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.023 [2024-11-20 15:40:30.878105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.023 [2024-11-20 15:40:30.878122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.023 qpair failed and we were unable to recover it.
00:30:42.023 [2024-11-20 15:40:30.888008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.023 [2024-11-20 15:40:30.888073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.023 [2024-11-20 15:40:30.888091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.023 [2024-11-20 15:40:30.888099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.023 [2024-11-20 15:40:30.888106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.023 [2024-11-20 15:40:30.888123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.023 qpair failed and we were unable to recover it.
00:30:42.023 [2024-11-20 15:40:30.897943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.023 [2024-11-20 15:40:30.898005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.023 [2024-11-20 15:40:30.898028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.023 [2024-11-20 15:40:30.898037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.023 [2024-11-20 15:40:30.898043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.023 [2024-11-20 15:40:30.898063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.023 qpair failed and we were unable to recover it.
00:30:42.023 [2024-11-20 15:40:30.908058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.023 [2024-11-20 15:40:30.908138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.023 [2024-11-20 15:40:30.908162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.023 [2024-11-20 15:40:30.908172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.023 [2024-11-20 15:40:30.908180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.024 [2024-11-20 15:40:30.908208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.024 qpair failed and we were unable to recover it.
00:30:42.024 [2024-11-20 15:40:30.918129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.024 [2024-11-20 15:40:30.918200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.024 [2024-11-20 15:40:30.918219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.024 [2024-11-20 15:40:30.918227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.024 [2024-11-20 15:40:30.918234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.024 [2024-11-20 15:40:30.918252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.024 qpair failed and we were unable to recover it.
00:30:42.024 [2024-11-20 15:40:30.928139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.024 [2024-11-20 15:40:30.928214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.024 [2024-11-20 15:40:30.928232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.024 [2024-11-20 15:40:30.928240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.024 [2024-11-20 15:40:30.928247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.024 [2024-11-20 15:40:30.928267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.024 qpair failed and we were unable to recover it.
00:30:42.024 [2024-11-20 15:40:30.938194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.024 [2024-11-20 15:40:30.938263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.024 [2024-11-20 15:40:30.938282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.024 [2024-11-20 15:40:30.938290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.024 [2024-11-20 15:40:30.938298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.024 [2024-11-20 15:40:30.938316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.024 qpair failed and we were unable to recover it.
00:30:42.024 [2024-11-20 15:40:30.948228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.024 [2024-11-20 15:40:30.948297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.024 [2024-11-20 15:40:30.948314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.024 [2024-11-20 15:40:30.948321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.024 [2024-11-20 15:40:30.948328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.024 [2024-11-20 15:40:30.948346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.024 qpair failed and we were unable to recover it.
00:30:42.024 [2024-11-20 15:40:30.958250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.024 [2024-11-20 15:40:30.958318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.024 [2024-11-20 15:40:30.958338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.024 [2024-11-20 15:40:30.958346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.024 [2024-11-20 15:40:30.958354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.024 [2024-11-20 15:40:30.958372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.024 qpair failed and we were unable to recover it.
00:30:42.024 [2024-11-20 15:40:30.968266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.024 [2024-11-20 15:40:30.968325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.024 [2024-11-20 15:40:30.968344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.024 [2024-11-20 15:40:30.968352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.024 [2024-11-20 15:40:30.968359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.024 [2024-11-20 15:40:30.968377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.024 qpair failed and we were unable to recover it.
00:30:42.024 [2024-11-20 15:40:30.978299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.024 [2024-11-20 15:40:30.978378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.024 [2024-11-20 15:40:30.978396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.024 [2024-11-20 15:40:30.978404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.024 [2024-11-20 15:40:30.978413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.024 [2024-11-20 15:40:30.978431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.024 qpair failed and we were unable to recover it.
00:30:42.286 [2024-11-20 15:40:30.988325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.286 [2024-11-20 15:40:30.988399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.286 [2024-11-20 15:40:30.988416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.286 [2024-11-20 15:40:30.988424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.286 [2024-11-20 15:40:30.988431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.286 [2024-11-20 15:40:30.988449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.286 qpair failed and we were unable to recover it.
00:30:42.286 [2024-11-20 15:40:30.998417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.286 [2024-11-20 15:40:30.998485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.286 [2024-11-20 15:40:30.998509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.286 [2024-11-20 15:40:30.998517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.286 [2024-11-20 15:40:30.998524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.286 [2024-11-20 15:40:30.998542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.286 qpair failed and we were unable to recover it.
00:30:42.286 [2024-11-20 15:40:31.008363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.286 [2024-11-20 15:40:31.008425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.286 [2024-11-20 15:40:31.008443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.286 [2024-11-20 15:40:31.008451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.286 [2024-11-20 15:40:31.008458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.286 [2024-11-20 15:40:31.008476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.286 qpair failed and we were unable to recover it.
00:30:42.286 [2024-11-20 15:40:31.018457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.286 [2024-11-20 15:40:31.018525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.286 [2024-11-20 15:40:31.018544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.286 [2024-11-20 15:40:31.018551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.286 [2024-11-20 15:40:31.018558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.286 [2024-11-20 15:40:31.018576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.286 qpair failed and we were unable to recover it.
00:30:42.286 [2024-11-20 15:40:31.028439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.286 [2024-11-20 15:40:31.028506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.286 [2024-11-20 15:40:31.028523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.286 [2024-11-20 15:40:31.028531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.286 [2024-11-20 15:40:31.028538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.286 [2024-11-20 15:40:31.028556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.286 qpair failed and we were unable to recover it.
00:30:42.286 [2024-11-20 15:40:31.038522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.286 [2024-11-20 15:40:31.038599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.286 [2024-11-20 15:40:31.038616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.286 [2024-11-20 15:40:31.038625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.286 [2024-11-20 15:40:31.038638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.286 [2024-11-20 15:40:31.038656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.286 qpair failed and we were unable to recover it.
00:30:42.286 [2024-11-20 15:40:31.048495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.286 [2024-11-20 15:40:31.048557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.286 [2024-11-20 15:40:31.048575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.286 [2024-11-20 15:40:31.048583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.286 [2024-11-20 15:40:31.048590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.286 [2024-11-20 15:40:31.048607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.286 qpair failed and we were unable to recover it.
00:30:42.286 [2024-11-20 15:40:31.058566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.287 [2024-11-20 15:40:31.058633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.287 [2024-11-20 15:40:31.058650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.287 [2024-11-20 15:40:31.058658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.287 [2024-11-20 15:40:31.058665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.287 [2024-11-20 15:40:31.058682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.287 qpair failed and we were unable to recover it.
00:30:42.287 [2024-11-20 15:40:31.068606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.287 [2024-11-20 15:40:31.068671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.287 [2024-11-20 15:40:31.068687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.287 [2024-11-20 15:40:31.068695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.287 [2024-11-20 15:40:31.068703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.287 [2024-11-20 15:40:31.068720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.287 qpair failed and we were unable to recover it.
00:30:42.287 [2024-11-20 15:40:31.078674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.287 [2024-11-20 15:40:31.078739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.287 [2024-11-20 15:40:31.078756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.287 [2024-11-20 15:40:31.078764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.287 [2024-11-20 15:40:31.078771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.287 [2024-11-20 15:40:31.078789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.287 qpair failed and we were unable to recover it.
00:30:42.287 [2024-11-20 15:40:31.088681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.287 [2024-11-20 15:40:31.088761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.287 [2024-11-20 15:40:31.088778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.287 [2024-11-20 15:40:31.088786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.287 [2024-11-20 15:40:31.088794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.287 [2024-11-20 15:40:31.088814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.287 qpair failed and we were unable to recover it.
00:30:42.287 [2024-11-20 15:40:31.098687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.287 [2024-11-20 15:40:31.098754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.287 [2024-11-20 15:40:31.098771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.287 [2024-11-20 15:40:31.098779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.287 [2024-11-20 15:40:31.098785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.287 [2024-11-20 15:40:31.098803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.287 qpair failed and we were unable to recover it.
00:30:42.287 [2024-11-20 15:40:31.108735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.287 [2024-11-20 15:40:31.108810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.287 [2024-11-20 15:40:31.108828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.287 [2024-11-20 15:40:31.108837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.287 [2024-11-20 15:40:31.108844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.287 [2024-11-20 15:40:31.108861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.287 qpair failed and we were unable to recover it.
00:30:42.287 [2024-11-20 15:40:31.118728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.287 [2024-11-20 15:40:31.118792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.287 [2024-11-20 15:40:31.118810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.287 [2024-11-20 15:40:31.118818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.287 [2024-11-20 15:40:31.118825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.287 [2024-11-20 15:40:31.118842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.287 qpair failed and we were unable to recover it.
00:30:42.287 [2024-11-20 15:40:31.128786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.287 [2024-11-20 15:40:31.128852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.287 [2024-11-20 15:40:31.128874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.287 [2024-11-20 15:40:31.128882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.287 [2024-11-20 15:40:31.128890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.287 [2024-11-20 15:40:31.128907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.287 qpair failed and we were unable to recover it.
00:30:42.287 [2024-11-20 15:40:31.138814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.287 [2024-11-20 15:40:31.138878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.287 [2024-11-20 15:40:31.138895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.287 [2024-11-20 15:40:31.138904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.287 [2024-11-20 15:40:31.138911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.287 [2024-11-20 15:40:31.138929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.287 qpair failed and we were unable to recover it.
00:30:42.287 [2024-11-20 15:40:31.148826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.287 [2024-11-20 15:40:31.148931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.287 [2024-11-20 15:40:31.148949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.287 [2024-11-20 15:40:31.148957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.287 [2024-11-20 15:40:31.148966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.287 [2024-11-20 15:40:31.148983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.287 qpair failed and we were unable to recover it.
00:30:42.287 [2024-11-20 15:40:31.158786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.287 [2024-11-20 15:40:31.158858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.287 [2024-11-20 15:40:31.158875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.287 [2024-11-20 15:40:31.158883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.287 [2024-11-20 15:40:31.158890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.287 [2024-11-20 15:40:31.158907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.287 qpair failed and we were unable to recover it.
00:30:42.287 [2024-11-20 15:40:31.168921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.287 [2024-11-20 15:40:31.168992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.287 [2024-11-20 15:40:31.169009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.287 [2024-11-20 15:40:31.169017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.287 [2024-11-20 15:40:31.169031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.287 [2024-11-20 15:40:31.169049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.287 qpair failed and we were unable to recover it.
00:30:42.287 [2024-11-20 15:40:31.178954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.287 [2024-11-20 15:40:31.179053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.287 [2024-11-20 15:40:31.179072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.287 [2024-11-20 15:40:31.179080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.287 [2024-11-20 15:40:31.179088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.287 [2024-11-20 15:40:31.179106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.287 qpair failed and we were unable to recover it.
00:30:42.288 [2024-11-20 15:40:31.188967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.288 [2024-11-20 15:40:31.189035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.288 [2024-11-20 15:40:31.189053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.288 [2024-11-20 15:40:31.189061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.288 [2024-11-20 15:40:31.189068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.288 [2024-11-20 15:40:31.189086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.288 qpair failed and we were unable to recover it.
00:30:42.288 [2024-11-20 15:40:31.199026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.288 [2024-11-20 15:40:31.199095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.288 [2024-11-20 15:40:31.199113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.288 [2024-11-20 15:40:31.199120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.288 [2024-11-20 15:40:31.199127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.288 [2024-11-20 15:40:31.199145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.288 qpair failed and we were unable to recover it.
00:30:42.288 [2024-11-20 15:40:31.209022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.288 [2024-11-20 15:40:31.209091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.288 [2024-11-20 15:40:31.209109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.288 [2024-11-20 15:40:31.209117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.288 [2024-11-20 15:40:31.209124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.288 [2024-11-20 15:40:31.209141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.288 qpair failed and we were unable to recover it.
00:30:42.288 [2024-11-20 15:40:31.219047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.288 [2024-11-20 15:40:31.219115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.288 [2024-11-20 15:40:31.219133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.288 [2024-11-20 15:40:31.219140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.288 [2024-11-20 15:40:31.219147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.288 [2024-11-20 15:40:31.219169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.288 qpair failed and we were unable to recover it.
00:30:42.288 [2024-11-20 15:40:31.229091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.288 [2024-11-20 15:40:31.229165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.288 [2024-11-20 15:40:31.229182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.288 [2024-11-20 15:40:31.229190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.288 [2024-11-20 15:40:31.229197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.288 [2024-11-20 15:40:31.229216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.288 qpair failed and we were unable to recover it.
00:30:42.288 [2024-11-20 15:40:31.239176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.288 [2024-11-20 15:40:31.239250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.288 [2024-11-20 15:40:31.239269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.288 [2024-11-20 15:40:31.239278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.288 [2024-11-20 15:40:31.239285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.288 [2024-11-20 15:40:31.239303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.288 qpair failed and we were unable to recover it.
00:30:42.550 [2024-11-20 15:40:31.249208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.550 [2024-11-20 15:40:31.249313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.550 [2024-11-20 15:40:31.249330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.550 [2024-11-20 15:40:31.249338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.550 [2024-11-20 15:40:31.249345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.550 [2024-11-20 15:40:31.249363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.550 qpair failed and we were unable to recover it.
00:30:42.550 [2024-11-20 15:40:31.259177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.550 [2024-11-20 15:40:31.259241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.550 [2024-11-20 15:40:31.259264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.550 [2024-11-20 15:40:31.259272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.550 [2024-11-20 15:40:31.259279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.550 [2024-11-20 15:40:31.259297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.550 qpair failed and we were unable to recover it.
00:30:42.550 [2024-11-20 15:40:31.269226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.550 [2024-11-20 15:40:31.269291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.550 [2024-11-20 15:40:31.269308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.550 [2024-11-20 15:40:31.269316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.550 [2024-11-20 15:40:31.269323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.550 [2024-11-20 15:40:31.269341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.550 qpair failed and we were unable to recover it.
00:30:42.550 [2024-11-20 15:40:31.279278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.550 [2024-11-20 15:40:31.279395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.550 [2024-11-20 15:40:31.279413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.550 [2024-11-20 15:40:31.279422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.550 [2024-11-20 15:40:31.279429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.550 [2024-11-20 15:40:31.279447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.550 qpair failed and we were unable to recover it.
00:30:42.550 [2024-11-20 15:40:31.289271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.550 [2024-11-20 15:40:31.289334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.550 [2024-11-20 15:40:31.289351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.550 [2024-11-20 15:40:31.289360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.550 [2024-11-20 15:40:31.289367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.550 [2024-11-20 15:40:31.289385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.550 qpair failed and we were unable to recover it.
00:30:42.550 [2024-11-20 15:40:31.299228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.550 [2024-11-20 15:40:31.299287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.550 [2024-11-20 15:40:31.299305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.550 [2024-11-20 15:40:31.299319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.550 [2024-11-20 15:40:31.299326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.550 [2024-11-20 15:40:31.299344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.550 qpair failed and we were unable to recover it.
00:30:42.550 [2024-11-20 15:40:31.309330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.550 [2024-11-20 15:40:31.309399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.550 [2024-11-20 15:40:31.309417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.550 [2024-11-20 15:40:31.309425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.551 [2024-11-20 15:40:31.309432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.551 [2024-11-20 15:40:31.309450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.551 qpair failed and we were unable to recover it.
00:30:42.551 [2024-11-20 15:40:31.319365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.551 [2024-11-20 15:40:31.319444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.551 [2024-11-20 15:40:31.319461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.551 [2024-11-20 15:40:31.319469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.551 [2024-11-20 15:40:31.319476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.551 [2024-11-20 15:40:31.319494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.551 qpair failed and we were unable to recover it.
00:30:42.551 [2024-11-20 15:40:31.329422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.551 [2024-11-20 15:40:31.329507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.551 [2024-11-20 15:40:31.329524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.551 [2024-11-20 15:40:31.329532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.551 [2024-11-20 15:40:31.329541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.551 [2024-11-20 15:40:31.329560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.551 qpair failed and we were unable to recover it.
00:30:42.551 [2024-11-20 15:40:31.339466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.551 [2024-11-20 15:40:31.339584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.551 [2024-11-20 15:40:31.339602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.551 [2024-11-20 15:40:31.339611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.551 [2024-11-20 15:40:31.339618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.551 [2024-11-20 15:40:31.339642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.551 qpair failed and we were unable to recover it.
00:30:42.551 [2024-11-20 15:40:31.349434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.551 [2024-11-20 15:40:31.349557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.551 [2024-11-20 15:40:31.349593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.551 [2024-11-20 15:40:31.349602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.551 [2024-11-20 15:40:31.349609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.551 [2024-11-20 15:40:31.349635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.551 qpair failed and we were unable to recover it.
00:30:42.551 [2024-11-20 15:40:31.359445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.551 [2024-11-20 15:40:31.359512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.551 [2024-11-20 15:40:31.359532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.551 [2024-11-20 15:40:31.359540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.551 [2024-11-20 15:40:31.359548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.551 [2024-11-20 15:40:31.359566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.551 qpair failed and we were unable to recover it.
00:30:42.551 [2024-11-20 15:40:31.369571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.551 [2024-11-20 15:40:31.369632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.551 [2024-11-20 15:40:31.369650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.551 [2024-11-20 15:40:31.369657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.551 [2024-11-20 15:40:31.369665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.551 [2024-11-20 15:40:31.369682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.551 qpair failed and we were unable to recover it.
00:30:42.551 [2024-11-20 15:40:31.379522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.551 [2024-11-20 15:40:31.379606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.551 [2024-11-20 15:40:31.379623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.551 [2024-11-20 15:40:31.379631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.551 [2024-11-20 15:40:31.379640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.551 [2024-11-20 15:40:31.379657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.551 qpair failed and we were unable to recover it.
00:30:42.551 [2024-11-20 15:40:31.389568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.551 [2024-11-20 15:40:31.389640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.551 [2024-11-20 15:40:31.389658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.551 [2024-11-20 15:40:31.389666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.551 [2024-11-20 15:40:31.389673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.551 [2024-11-20 15:40:31.389691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.551 qpair failed and we were unable to recover it.
00:30:42.551 [2024-11-20 15:40:31.399656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.551 [2024-11-20 15:40:31.399733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.551 [2024-11-20 15:40:31.399752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.551 [2024-11-20 15:40:31.399760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.551 [2024-11-20 15:40:31.399768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.551 [2024-11-20 15:40:31.399789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.551 qpair failed and we were unable to recover it.
00:30:42.551 [2024-11-20 15:40:31.409657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.551 [2024-11-20 15:40:31.409722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.551 [2024-11-20 15:40:31.409741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.551 [2024-11-20 15:40:31.409749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.551 [2024-11-20 15:40:31.409757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.551 [2024-11-20 15:40:31.409775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.551 qpair failed and we were unable to recover it.
00:30:42.551 [2024-11-20 15:40:31.419640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.551 [2024-11-20 15:40:31.419703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.551 [2024-11-20 15:40:31.419721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.551 [2024-11-20 15:40:31.419729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.551 [2024-11-20 15:40:31.419737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.551 [2024-11-20 15:40:31.419755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.551 qpair failed and we were unable to recover it.
00:30:42.551 [2024-11-20 15:40:31.429727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.551 [2024-11-20 15:40:31.429799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.551 [2024-11-20 15:40:31.429817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.551 [2024-11-20 15:40:31.429835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.551 [2024-11-20 15:40:31.429844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.551 [2024-11-20 15:40:31.429863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.551 qpair failed and we were unable to recover it.
00:30:42.551 [2024-11-20 15:40:31.439720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.551 [2024-11-20 15:40:31.439796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.551 [2024-11-20 15:40:31.439817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.551 [2024-11-20 15:40:31.439825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.551 [2024-11-20 15:40:31.439833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.552 [2024-11-20 15:40:31.439853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.552 qpair failed and we were unable to recover it.
00:30:42.552 [2024-11-20 15:40:31.449741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.552 [2024-11-20 15:40:31.449812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.552 [2024-11-20 15:40:31.449831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.552 [2024-11-20 15:40:31.449839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.552 [2024-11-20 15:40:31.449846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.552 [2024-11-20 15:40:31.449866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.552 qpair failed and we were unable to recover it.
00:30:42.552 [2024-11-20 15:40:31.459791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.552 [2024-11-20 15:40:31.459854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.552 [2024-11-20 15:40:31.459872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.552 [2024-11-20 15:40:31.459881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.552 [2024-11-20 15:40:31.459888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.552 [2024-11-20 15:40:31.459906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.552 qpair failed and we were unable to recover it.
00:30:42.552 [2024-11-20 15:40:31.469790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.552 [2024-11-20 15:40:31.469854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.552 [2024-11-20 15:40:31.469872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.552 [2024-11-20 15:40:31.469880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.552 [2024-11-20 15:40:31.469887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.552 [2024-11-20 15:40:31.469910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.552 qpair failed and we were unable to recover it.
00:30:42.552 [2024-11-20 15:40:31.479838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.552 [2024-11-20 15:40:31.479898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.552 [2024-11-20 15:40:31.479915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.552 [2024-11-20 15:40:31.479923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.552 [2024-11-20 15:40:31.479931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.552 [2024-11-20 15:40:31.479948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.552 qpair failed and we were unable to recover it.
00:30:42.552 [2024-11-20 15:40:31.489766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.552 [2024-11-20 15:40:31.489833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.552 [2024-11-20 15:40:31.489850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.552 [2024-11-20 15:40:31.489858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.552 [2024-11-20 15:40:31.489865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:42.552 [2024-11-20 15:40:31.489882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:42.552 qpair failed and we were unable to recover it.
00:30:42.552 [2024-11-20 15:40:31.499881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.552 [2024-11-20 15:40:31.499956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.552 [2024-11-20 15:40:31.499972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.552 [2024-11-20 15:40:31.499980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.552 [2024-11-20 15:40:31.499987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:42.552 [2024-11-20 15:40:31.500006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:42.552 qpair failed and we were unable to recover it. 00:30:42.817 [2024-11-20 15:40:31.509903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.817 [2024-11-20 15:40:31.509978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.817 [2024-11-20 15:40:31.509994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.817 [2024-11-20 15:40:31.510003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.817 [2024-11-20 15:40:31.510010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:42.817 [2024-11-20 15:40:31.510027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:42.817 qpair failed and we were unable to recover it. 00:30:42.817 [2024-11-20 15:40:31.519947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.817 [2024-11-20 15:40:31.520020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.817 [2024-11-20 15:40:31.520036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.817 [2024-11-20 15:40:31.520044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.817 [2024-11-20 15:40:31.520051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:42.817 [2024-11-20 15:40:31.520068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:42.817 qpair failed and we were unable to recover it. 
00:30:42.817 [2024-11-20 15:40:31.529968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.817 [2024-11-20 15:40:31.530026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.817 [2024-11-20 15:40:31.530041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.817 [2024-11-20 15:40:31.530049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.817 [2024-11-20 15:40:31.530056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:42.818 [2024-11-20 15:40:31.530072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:42.818 qpair failed and we were unable to recover it. 00:30:42.818 [2024-11-20 15:40:31.539987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.818 [2024-11-20 15:40:31.540044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.818 [2024-11-20 15:40:31.540060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.818 [2024-11-20 15:40:31.540067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.818 [2024-11-20 15:40:31.540074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:42.818 [2024-11-20 15:40:31.540090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:42.818 qpair failed and we were unable to recover it. 00:30:42.818 [2024-11-20 15:40:31.550018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.818 [2024-11-20 15:40:31.550083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.818 [2024-11-20 15:40:31.550098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.818 [2024-11-20 15:40:31.550105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.818 [2024-11-20 15:40:31.550112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:42.818 [2024-11-20 15:40:31.550128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:42.818 qpair failed and we were unable to recover it. 
00:30:42.818 [2024-11-20 15:40:31.560033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.818 [2024-11-20 15:40:31.560093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.818 [2024-11-20 15:40:31.560112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.818 [2024-11-20 15:40:31.560120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.818 [2024-11-20 15:40:31.560127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:42.818 [2024-11-20 15:40:31.560143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:42.818 qpair failed and we were unable to recover it. 00:30:42.818 [2024-11-20 15:40:31.570073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.818 [2024-11-20 15:40:31.570134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.818 [2024-11-20 15:40:31.570149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.818 [2024-11-20 15:40:31.570157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.818 [2024-11-20 15:40:31.570170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:42.818 [2024-11-20 15:40:31.570186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:42.818 qpair failed and we were unable to recover it. 00:30:42.818 [2024-11-20 15:40:31.580107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.818 [2024-11-20 15:40:31.580167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.818 [2024-11-20 15:40:31.580182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.818 [2024-11-20 15:40:31.580190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.818 [2024-11-20 15:40:31.580197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:42.818 [2024-11-20 15:40:31.580212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:42.818 qpair failed and we were unable to recover it. 
00:30:42.818 [2024-11-20 15:40:31.590126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.818 [2024-11-20 15:40:31.590210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.818 [2024-11-20 15:40:31.590225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.818 [2024-11-20 15:40:31.590232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.818 [2024-11-20 15:40:31.590239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:42.818 [2024-11-20 15:40:31.590256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:42.818 qpair failed and we were unable to recover it. 00:30:42.818 [2024-11-20 15:40:31.600134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.818 [2024-11-20 15:40:31.600195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.818 [2024-11-20 15:40:31.600210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.818 [2024-11-20 15:40:31.600217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.818 [2024-11-20 15:40:31.600227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:42.818 [2024-11-20 15:40:31.600242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:42.818 qpair failed and we were unable to recover it. 00:30:42.818 [2024-11-20 15:40:31.610154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.818 [2024-11-20 15:40:31.610213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.818 [2024-11-20 15:40:31.610228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.818 [2024-11-20 15:40:31.610235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.818 [2024-11-20 15:40:31.610242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:42.818 [2024-11-20 15:40:31.610258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:42.818 qpair failed and we were unable to recover it. 
00:30:42.818 [2024-11-20 15:40:31.620156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.818 [2024-11-20 15:40:31.620214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.818 [2024-11-20 15:40:31.620228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.818 [2024-11-20 15:40:31.620235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.818 [2024-11-20 15:40:31.620242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:42.818 [2024-11-20 15:40:31.620257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:42.818 qpair failed and we were unable to recover it. 00:30:42.818 [2024-11-20 15:40:31.630222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.818 [2024-11-20 15:40:31.630274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.818 [2024-11-20 15:40:31.630288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.818 [2024-11-20 15:40:31.630296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.818 [2024-11-20 15:40:31.630303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:42.818 [2024-11-20 15:40:31.630317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:42.818 qpair failed and we were unable to recover it. 00:30:42.818 [2024-11-20 15:40:31.640237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.818 [2024-11-20 15:40:31.640287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.818 [2024-11-20 15:40:31.640301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.818 [2024-11-20 15:40:31.640308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.818 [2024-11-20 15:40:31.640314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:42.818 [2024-11-20 15:40:31.640329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:42.818 qpair failed and we were unable to recover it. 
00:30:42.818 [2024-11-20 15:40:31.650174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.818 [2024-11-20 15:40:31.650226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.818 [2024-11-20 15:40:31.650240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.818 [2024-11-20 15:40:31.650247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.818 [2024-11-20 15:40:31.650254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:42.818 [2024-11-20 15:40:31.650269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:42.818 qpair failed and we were unable to recover it. 00:30:42.818 [2024-11-20 15:40:31.660273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.818 [2024-11-20 15:40:31.660340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.818 [2024-11-20 15:40:31.660354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.818 [2024-11-20 15:40:31.660362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.818 [2024-11-20 15:40:31.660370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:42.819 [2024-11-20 15:40:31.660386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:42.819 qpair failed and we were unable to recover it. 00:30:42.819 [2024-11-20 15:40:31.670314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.819 [2024-11-20 15:40:31.670368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.819 [2024-11-20 15:40:31.670381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.819 [2024-11-20 15:40:31.670388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.819 [2024-11-20 15:40:31.670395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:42.819 [2024-11-20 15:40:31.670410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:42.819 qpair failed and we were unable to recover it. 
00:30:42.819 [2024-11-20 15:40:31.680286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.819 [2024-11-20 15:40:31.680382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.819 [2024-11-20 15:40:31.680396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.819 [2024-11-20 15:40:31.680403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.819 [2024-11-20 15:40:31.680410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:42.819 [2024-11-20 15:40:31.680424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:42.819 qpair failed and we were unable to recover it. 00:30:42.819 [2024-11-20 15:40:31.690379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.819 [2024-11-20 15:40:31.690481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.819 [2024-11-20 15:40:31.690499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.819 [2024-11-20 15:40:31.690506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.819 [2024-11-20 15:40:31.690513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:42.819 [2024-11-20 15:40:31.690528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:42.819 qpair failed and we were unable to recover it. 00:30:42.819 [2024-11-20 15:40:31.700424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.819 [2024-11-20 15:40:31.700473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.819 [2024-11-20 15:40:31.700487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.819 [2024-11-20 15:40:31.700494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.819 [2024-11-20 15:40:31.700501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:42.819 [2024-11-20 15:40:31.700515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:42.819 qpair failed and we were unable to recover it. 
00:30:42.819 [2024-11-20 15:40:31.710449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.819 [2024-11-20 15:40:31.710503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.819 [2024-11-20 15:40:31.710516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.819 [2024-11-20 15:40:31.710523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.819 [2024-11-20 15:40:31.710530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:42.819 [2024-11-20 15:40:31.710544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:42.819 qpair failed and we were unable to recover it. 00:30:42.819 [2024-11-20 15:40:31.720446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.819 [2024-11-20 15:40:31.720499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.819 [2024-11-20 15:40:31.720512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.819 [2024-11-20 15:40:31.720519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.819 [2024-11-20 15:40:31.720526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:42.819 [2024-11-20 15:40:31.720540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:42.819 qpair failed and we were unable to recover it. 00:30:42.819 [2024-11-20 15:40:31.730467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.819 [2024-11-20 15:40:31.730523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.819 [2024-11-20 15:40:31.730536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.819 [2024-11-20 15:40:31.730544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.819 [2024-11-20 15:40:31.730554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:42.819 [2024-11-20 15:40:31.730569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:42.819 qpair failed and we were unable to recover it. 
00:30:42.819 [2024-11-20 15:40:31.740532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.819 [2024-11-20 15:40:31.740583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.819 [2024-11-20 15:40:31.740597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.819 [2024-11-20 15:40:31.740604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.819 [2024-11-20 15:40:31.740610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:42.819 [2024-11-20 15:40:31.740625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:42.819 qpair failed and we were unable to recover it. 00:30:42.819 [2024-11-20 15:40:31.750555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.819 [2024-11-20 15:40:31.750608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.819 [2024-11-20 15:40:31.750621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.819 [2024-11-20 15:40:31.750628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.819 [2024-11-20 15:40:31.750635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:42.819 [2024-11-20 15:40:31.750649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:42.819 qpair failed and we were unable to recover it. 00:30:42.819 [2024-11-20 15:40:31.760568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.819 [2024-11-20 15:40:31.760627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.819 [2024-11-20 15:40:31.760640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.819 [2024-11-20 15:40:31.760647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.819 [2024-11-20 15:40:31.760653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:42.819 [2024-11-20 15:40:31.760668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:42.819 qpair failed and we were unable to recover it. 
00:30:42.819 [2024-11-20 15:40:31.770497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.819 [2024-11-20 15:40:31.770547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.819 [2024-11-20 15:40:31.770560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.819 [2024-11-20 15:40:31.770567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.819 [2024-11-20 15:40:31.770573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:42.819 [2024-11-20 15:40:31.770587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:42.819 qpair failed and we were unable to recover it. 00:30:43.082 [2024-11-20 15:40:31.780653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.082 [2024-11-20 15:40:31.780707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.082 [2024-11-20 15:40:31.780720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.082 [2024-11-20 15:40:31.780728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.082 [2024-11-20 15:40:31.780734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.082 [2024-11-20 15:40:31.780749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.082 qpair failed and we were unable to recover it. 00:30:43.082 [2024-11-20 15:40:31.790673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.082 [2024-11-20 15:40:31.790729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.082 [2024-11-20 15:40:31.790743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.082 [2024-11-20 15:40:31.790750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.082 [2024-11-20 15:40:31.790756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.082 [2024-11-20 15:40:31.790770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.082 qpair failed and we were unable to recover it. 
00:30:43.082 [2024-11-20 15:40:31.800611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.082 [2024-11-20 15:40:31.800660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.082 [2024-11-20 15:40:31.800673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.082 [2024-11-20 15:40:31.800681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.082 [2024-11-20 15:40:31.800688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.082 [2024-11-20 15:40:31.800701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.082 qpair failed and we were unable to recover it. 00:30:43.082 [2024-11-20 15:40:31.810720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.082 [2024-11-20 15:40:31.810773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.082 [2024-11-20 15:40:31.810788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.082 [2024-11-20 15:40:31.810795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.082 [2024-11-20 15:40:31.810802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.082 [2024-11-20 15:40:31.810817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.082 qpair failed and we were unable to recover it. 00:30:43.082 [2024-11-20 15:40:31.820728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.083 [2024-11-20 15:40:31.820802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.083 [2024-11-20 15:40:31.820816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.083 [2024-11-20 15:40:31.820823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.083 [2024-11-20 15:40:31.820830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.083 [2024-11-20 15:40:31.820845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.083 qpair failed and we were unable to recover it. 
00:30:43.083 [2024-11-20 15:40:31.830781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.083 [2024-11-20 15:40:31.830835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.083 [2024-11-20 15:40:31.830849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.083 [2024-11-20 15:40:31.830856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.083 [2024-11-20 15:40:31.830862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.083 [2024-11-20 15:40:31.830877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.083 qpair failed and we were unable to recover it. 00:30:43.083 [2024-11-20 15:40:31.840781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.083 [2024-11-20 15:40:31.840851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.083 [2024-11-20 15:40:31.840876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.083 [2024-11-20 15:40:31.840885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.083 [2024-11-20 15:40:31.840893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.083 [2024-11-20 15:40:31.840913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.083 qpair failed and we were unable to recover it. 00:30:43.083 [2024-11-20 15:40:31.850835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.083 [2024-11-20 15:40:31.850895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.083 [2024-11-20 15:40:31.850911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.083 [2024-11-20 15:40:31.850918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.083 [2024-11-20 15:40:31.850925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.083 [2024-11-20 15:40:31.850942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.083 qpair failed and we were unable to recover it. 
00:30:43.083 [2024-11-20 15:40:31.860866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.083 [2024-11-20 15:40:31.860923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.083 [2024-11-20 15:40:31.860936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.083 [2024-11-20 15:40:31.860948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.083 [2024-11-20 15:40:31.860955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.083 [2024-11-20 15:40:31.860970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.083 qpair failed and we were unable to recover it. 00:30:43.083 [2024-11-20 15:40:31.870887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.083 [2024-11-20 15:40:31.870944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.083 [2024-11-20 15:40:31.870957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.083 [2024-11-20 15:40:31.870965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.083 [2024-11-20 15:40:31.870971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.083 [2024-11-20 15:40:31.870986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.083 qpair failed and we were unable to recover it. 00:30:43.083 [2024-11-20 15:40:31.880904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.083 [2024-11-20 15:40:31.880958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.083 [2024-11-20 15:40:31.880971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.083 [2024-11-20 15:40:31.880978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.083 [2024-11-20 15:40:31.880985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.083 [2024-11-20 15:40:31.880999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.083 qpair failed and we were unable to recover it. 
00:30:43.083 [2024-11-20 15:40:31.890949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.083 [2024-11-20 15:40:31.891002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.083 [2024-11-20 15:40:31.891016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.083 [2024-11-20 15:40:31.891023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.083 [2024-11-20 15:40:31.891029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.083 [2024-11-20 15:40:31.891043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.083 qpair failed and we were unable to recover it. 00:30:43.083 [2024-11-20 15:40:31.900983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.083 [2024-11-20 15:40:31.901030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.083 [2024-11-20 15:40:31.901043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.083 [2024-11-20 15:40:31.901050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.083 [2024-11-20 15:40:31.901057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.083 [2024-11-20 15:40:31.901075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.083 qpair failed and we were unable to recover it. 00:30:43.083 [2024-11-20 15:40:31.911020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.083 [2024-11-20 15:40:31.911076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.083 [2024-11-20 15:40:31.911090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.083 [2024-11-20 15:40:31.911097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.083 [2024-11-20 15:40:31.911104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.083 [2024-11-20 15:40:31.911119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.083 qpair failed and we were unable to recover it. 
00:30:43.083 [2024-11-20 15:40:31.920875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.083 [2024-11-20 15:40:31.920923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.083 [2024-11-20 15:40:31.920936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.083 [2024-11-20 15:40:31.920944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.083 [2024-11-20 15:40:31.920950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.083 [2024-11-20 15:40:31.920965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.083 qpair failed and we were unable to recover it. 00:30:43.083 [2024-11-20 15:40:31.931018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.083 [2024-11-20 15:40:31.931077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.083 [2024-11-20 15:40:31.931090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.083 [2024-11-20 15:40:31.931097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.083 [2024-11-20 15:40:31.931104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.083 [2024-11-20 15:40:31.931119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.083 qpair failed and we were unable to recover it. 00:30:43.083 [2024-11-20 15:40:31.941088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.083 [2024-11-20 15:40:31.941138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.083 [2024-11-20 15:40:31.941151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.083 [2024-11-20 15:40:31.941162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.083 [2024-11-20 15:40:31.941169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.083 [2024-11-20 15:40:31.941184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.083 qpair failed and we were unable to recover it. 
00:30:43.083 [2024-11-20 15:40:31.951117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.084 [2024-11-20 15:40:31.951180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.084 [2024-11-20 15:40:31.951194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.084 [2024-11-20 15:40:31.951201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.084 [2024-11-20 15:40:31.951208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.084 [2024-11-20 15:40:31.951223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.084 qpair failed and we were unable to recover it. 00:30:43.084 [2024-11-20 15:40:31.961131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.084 [2024-11-20 15:40:31.961186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.084 [2024-11-20 15:40:31.961201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.084 [2024-11-20 15:40:31.961208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.084 [2024-11-20 15:40:31.961215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.084 [2024-11-20 15:40:31.961229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.084 qpair failed and we were unable to recover it. 00:30:43.084 [2024-11-20 15:40:31.971164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.084 [2024-11-20 15:40:31.971220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.084 [2024-11-20 15:40:31.971234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.084 [2024-11-20 15:40:31.971241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.084 [2024-11-20 15:40:31.971248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.084 [2024-11-20 15:40:31.971263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.084 qpair failed and we were unable to recover it. 
00:30:43.084 [2024-11-20 15:40:31.981189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.084 [2024-11-20 15:40:31.981243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.084 [2024-11-20 15:40:31.981257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.084 [2024-11-20 15:40:31.981264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.084 [2024-11-20 15:40:31.981271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.084 [2024-11-20 15:40:31.981285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.084 qpair failed and we were unable to recover it. 00:30:43.084 [2024-11-20 15:40:31.991226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.084 [2024-11-20 15:40:31.991318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.084 [2024-11-20 15:40:31.991331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.084 [2024-11-20 15:40:31.991342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.084 [2024-11-20 15:40:31.991349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.084 [2024-11-20 15:40:31.991363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.084 qpair failed and we were unable to recover it. 00:30:43.084 [2024-11-20 15:40:32.001218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.084 [2024-11-20 15:40:32.001271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.084 [2024-11-20 15:40:32.001284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.084 [2024-11-20 15:40:32.001291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.084 [2024-11-20 15:40:32.001298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.084 [2024-11-20 15:40:32.001313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.084 qpair failed and we were unable to recover it. 
00:30:43.084 [2024-11-20 15:40:32.011262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.084 [2024-11-20 15:40:32.011315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.084 [2024-11-20 15:40:32.011328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.084 [2024-11-20 15:40:32.011335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.084 [2024-11-20 15:40:32.011342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.084 [2024-11-20 15:40:32.011356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.084 qpair failed and we were unable to recover it. 00:30:43.084 [2024-11-20 15:40:32.021301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.084 [2024-11-20 15:40:32.021351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.084 [2024-11-20 15:40:32.021364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.084 [2024-11-20 15:40:32.021371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.084 [2024-11-20 15:40:32.021378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.084 [2024-11-20 15:40:32.021393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.084 qpair failed and we were unable to recover it. 00:30:43.084 [2024-11-20 15:40:32.031351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.084 [2024-11-20 15:40:32.031409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.084 [2024-11-20 15:40:32.031422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.084 [2024-11-20 15:40:32.031429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.084 [2024-11-20 15:40:32.031436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.084 [2024-11-20 15:40:32.031454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.084 qpair failed and we were unable to recover it. 
00:30:43.347 [2024-11-20 15:40:32.041337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.347 [2024-11-20 15:40:32.041395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.347 [2024-11-20 15:40:32.041408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.347 [2024-11-20 15:40:32.041415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.347 [2024-11-20 15:40:32.041422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.347 [2024-11-20 15:40:32.041437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.347 qpair failed and we were unable to recover it. 00:30:43.347 [2024-11-20 15:40:32.051357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.347 [2024-11-20 15:40:32.051414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.347 [2024-11-20 15:40:32.051427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.347 [2024-11-20 15:40:32.051434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.347 [2024-11-20 15:40:32.051440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.347 [2024-11-20 15:40:32.051454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.347 qpair failed and we were unable to recover it. 00:30:43.347 [2024-11-20 15:40:32.061295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.347 [2024-11-20 15:40:32.061356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.347 [2024-11-20 15:40:32.061369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.347 [2024-11-20 15:40:32.061376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.347 [2024-11-20 15:40:32.061382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.347 [2024-11-20 15:40:32.061397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.347 qpair failed and we were unable to recover it. 
00:30:43.347 [2024-11-20 15:40:32.071458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.347 [2024-11-20 15:40:32.071527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.347 [2024-11-20 15:40:32.071540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.347 [2024-11-20 15:40:32.071547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.347 [2024-11-20 15:40:32.071554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.347 [2024-11-20 15:40:32.071569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.347 qpair failed and we were unable to recover it.
00:30:43.347 [2024-11-20 15:40:32.081454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.347 [2024-11-20 15:40:32.081507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.347 [2024-11-20 15:40:32.081521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.347 [2024-11-20 15:40:32.081528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.347 [2024-11-20 15:40:32.081534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.347 [2024-11-20 15:40:32.081549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.347 qpair failed and we were unable to recover it.
00:30:43.347 [2024-11-20 15:40:32.091511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.347 [2024-11-20 15:40:32.091564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.347 [2024-11-20 15:40:32.091577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.347 [2024-11-20 15:40:32.091584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.347 [2024-11-20 15:40:32.091590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.347 [2024-11-20 15:40:32.091605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.347 qpair failed and we were unable to recover it.
00:30:43.347 [2024-11-20 15:40:32.101406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.347 [2024-11-20 15:40:32.101465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.347 [2024-11-20 15:40:32.101478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.347 [2024-11-20 15:40:32.101486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.347 [2024-11-20 15:40:32.101492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.347 [2024-11-20 15:40:32.101507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.347 qpair failed and we were unable to recover it.
00:30:43.347 [2024-11-20 15:40:32.111538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.347 [2024-11-20 15:40:32.111591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.347 [2024-11-20 15:40:32.111605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.347 [2024-11-20 15:40:32.111612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.347 [2024-11-20 15:40:32.111619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.347 [2024-11-20 15:40:32.111633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.347 qpair failed and we were unable to recover it.
00:30:43.347 [2024-11-20 15:40:32.121526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.347 [2024-11-20 15:40:32.121574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.347 [2024-11-20 15:40:32.121590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.347 [2024-11-20 15:40:32.121598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.347 [2024-11-20 15:40:32.121604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.347 [2024-11-20 15:40:32.121619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.347 qpair failed and we were unable to recover it.
00:30:43.347 [2024-11-20 15:40:32.131583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.347 [2024-11-20 15:40:32.131634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.347 [2024-11-20 15:40:32.131647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.347 [2024-11-20 15:40:32.131655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.347 [2024-11-20 15:40:32.131661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.347 [2024-11-20 15:40:32.131676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.347 qpair failed and we were unable to recover it.
00:30:43.347 [2024-11-20 15:40:32.141633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.347 [2024-11-20 15:40:32.141686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.348 [2024-11-20 15:40:32.141700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.348 [2024-11-20 15:40:32.141707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.348 [2024-11-20 15:40:32.141713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.348 [2024-11-20 15:40:32.141728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.348 qpair failed and we were unable to recover it.
00:30:43.348 [2024-11-20 15:40:32.151536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.348 [2024-11-20 15:40:32.151591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.348 [2024-11-20 15:40:32.151604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.348 [2024-11-20 15:40:32.151612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.348 [2024-11-20 15:40:32.151618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.348 [2024-11-20 15:40:32.151633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.348 qpair failed and we were unable to recover it.
00:30:43.348 [2024-11-20 15:40:32.161656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.348 [2024-11-20 15:40:32.161703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.348 [2024-11-20 15:40:32.161716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.348 [2024-11-20 15:40:32.161723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.348 [2024-11-20 15:40:32.161737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.348 [2024-11-20 15:40:32.161753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.348 qpair failed and we were unable to recover it.
00:30:43.348 [2024-11-20 15:40:32.171720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.348 [2024-11-20 15:40:32.171774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.348 [2024-11-20 15:40:32.171787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.348 [2024-11-20 15:40:32.171795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.348 [2024-11-20 15:40:32.171801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.348 [2024-11-20 15:40:32.171816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.348 qpair failed and we were unable to recover it.
00:30:43.348 [2024-11-20 15:40:32.181707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.348 [2024-11-20 15:40:32.181765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.348 [2024-11-20 15:40:32.181778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.348 [2024-11-20 15:40:32.181786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.348 [2024-11-20 15:40:32.181792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.348 [2024-11-20 15:40:32.181807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.348 qpair failed and we were unable to recover it.
00:30:43.348 [2024-11-20 15:40:32.191766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.348 [2024-11-20 15:40:32.191823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.348 [2024-11-20 15:40:32.191837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.348 [2024-11-20 15:40:32.191844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.348 [2024-11-20 15:40:32.191851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.348 [2024-11-20 15:40:32.191865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.348 qpair failed and we were unable to recover it.
00:30:43.348 [2024-11-20 15:40:32.201743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.348 [2024-11-20 15:40:32.201791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.348 [2024-11-20 15:40:32.201804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.348 [2024-11-20 15:40:32.201812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.348 [2024-11-20 15:40:32.201818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.348 [2024-11-20 15:40:32.201833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.348 qpair failed and we were unable to recover it.
00:30:43.348 [2024-11-20 15:40:32.211712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.348 [2024-11-20 15:40:32.211761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.348 [2024-11-20 15:40:32.211774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.348 [2024-11-20 15:40:32.211781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.348 [2024-11-20 15:40:32.211787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.348 [2024-11-20 15:40:32.211801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.348 qpair failed and we were unable to recover it.
00:30:43.348 [2024-11-20 15:40:32.221844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.348 [2024-11-20 15:40:32.221894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.348 [2024-11-20 15:40:32.221908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.348 [2024-11-20 15:40:32.221915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.348 [2024-11-20 15:40:32.221922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.348 [2024-11-20 15:40:32.221937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.348 qpair failed and we were unable to recover it.
00:30:43.348 [2024-11-20 15:40:32.231864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.348 [2024-11-20 15:40:32.231923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.348 [2024-11-20 15:40:32.231936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.348 [2024-11-20 15:40:32.231943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.348 [2024-11-20 15:40:32.231950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.348 [2024-11-20 15:40:32.231964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.348 qpair failed and we were unable to recover it.
00:30:43.348 [2024-11-20 15:40:32.241889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.348 [2024-11-20 15:40:32.241949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.348 [2024-11-20 15:40:32.241963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.348 [2024-11-20 15:40:32.241970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.348 [2024-11-20 15:40:32.241976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.348 [2024-11-20 15:40:32.241991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.348 qpair failed and we were unable to recover it.
00:30:43.348 [2024-11-20 15:40:32.251933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.348 [2024-11-20 15:40:32.251989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.348 [2024-11-20 15:40:32.252005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.348 [2024-11-20 15:40:32.252013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.348 [2024-11-20 15:40:32.252019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.349 [2024-11-20 15:40:32.252034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.349 qpair failed and we were unable to recover it.
00:30:43.349 [2024-11-20 15:40:32.261957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.349 [2024-11-20 15:40:32.262007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.349 [2024-11-20 15:40:32.262020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.349 [2024-11-20 15:40:32.262027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.349 [2024-11-20 15:40:32.262034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.349 [2024-11-20 15:40:32.262048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.349 qpair failed and we were unable to recover it.
00:30:43.349 [2024-11-20 15:40:32.271978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.349 [2024-11-20 15:40:32.272050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.349 [2024-11-20 15:40:32.272063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.349 [2024-11-20 15:40:32.272070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.349 [2024-11-20 15:40:32.272077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.349 [2024-11-20 15:40:32.272091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.349 qpair failed and we were unable to recover it.
00:30:43.349 [2024-11-20 15:40:32.281994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.349 [2024-11-20 15:40:32.282046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.349 [2024-11-20 15:40:32.282059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.349 [2024-11-20 15:40:32.282067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.349 [2024-11-20 15:40:32.282073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.349 [2024-11-20 15:40:32.282088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.349 qpair failed and we were unable to recover it.
00:30:43.349 [2024-11-20 15:40:32.292053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.349 [2024-11-20 15:40:32.292100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.349 [2024-11-20 15:40:32.292114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.349 [2024-11-20 15:40:32.292121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.349 [2024-11-20 15:40:32.292131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.349 [2024-11-20 15:40:32.292146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.349 qpair failed and we were unable to recover it.
00:30:43.349 [2024-11-20 15:40:32.302065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.349 [2024-11-20 15:40:32.302122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.349 [2024-11-20 15:40:32.302135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.349 [2024-11-20 15:40:32.302143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.349 [2024-11-20 15:40:32.302149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.349 [2024-11-20 15:40:32.302167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.349 qpair failed and we were unable to recover it.
00:30:43.611 [2024-11-20 15:40:32.312041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.612 [2024-11-20 15:40:32.312093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.612 [2024-11-20 15:40:32.312106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.612 [2024-11-20 15:40:32.312114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.612 [2024-11-20 15:40:32.312120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.612 [2024-11-20 15:40:32.312135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.612 qpair failed and we were unable to recover it.
00:30:43.612 [2024-11-20 15:40:32.322037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.612 [2024-11-20 15:40:32.322126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.612 [2024-11-20 15:40:32.322139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.612 [2024-11-20 15:40:32.322147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.612 [2024-11-20 15:40:32.322154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.612 [2024-11-20 15:40:32.322173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.612 qpair failed and we were unable to recover it.
00:30:43.612 [2024-11-20 15:40:32.332154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.612 [2024-11-20 15:40:32.332207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.612 [2024-11-20 15:40:32.332220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.612 [2024-11-20 15:40:32.332228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.612 [2024-11-20 15:40:32.332235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.612 [2024-11-20 15:40:32.332249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.612 qpair failed and we were unable to recover it.
00:30:43.612 [2024-11-20 15:40:32.342177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.612 [2024-11-20 15:40:32.342227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.612 [2024-11-20 15:40:32.342240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.612 [2024-11-20 15:40:32.342248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.612 [2024-11-20 15:40:32.342254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.612 [2024-11-20 15:40:32.342269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.612 qpair failed and we were unable to recover it.
00:30:43.612 [2024-11-20 15:40:32.352255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.612 [2024-11-20 15:40:32.352308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.612 [2024-11-20 15:40:32.352322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.612 [2024-11-20 15:40:32.352329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.612 [2024-11-20 15:40:32.352336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.612 [2024-11-20 15:40:32.352350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.612 qpair failed and we were unable to recover it.
00:30:43.612 [2024-11-20 15:40:32.362183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.612 [2024-11-20 15:40:32.362233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.612 [2024-11-20 15:40:32.362246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.612 [2024-11-20 15:40:32.362253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.612 [2024-11-20 15:40:32.362260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.612 [2024-11-20 15:40:32.362275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.612 qpair failed and we were unable to recover it.
00:30:43.612 [2024-11-20 15:40:32.372144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.612 [2024-11-20 15:40:32.372200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.612 [2024-11-20 15:40:32.372213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.612 [2024-11-20 15:40:32.372220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.612 [2024-11-20 15:40:32.372227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.612 [2024-11-20 15:40:32.372241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.612 qpair failed and we were unable to recover it.
00:30:43.612 [2024-11-20 15:40:32.382271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.612 [2024-11-20 15:40:32.382323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.612 [2024-11-20 15:40:32.382336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.612 [2024-11-20 15:40:32.382343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.612 [2024-11-20 15:40:32.382350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.612 [2024-11-20 15:40:32.382364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.612 qpair failed and we were unable to recover it.
00:30:43.612 [2024-11-20 15:40:32.392237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.612 [2024-11-20 15:40:32.392291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.612 [2024-11-20 15:40:32.392304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.612 [2024-11-20 15:40:32.392311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.612 [2024-11-20 15:40:32.392318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.612 [2024-11-20 15:40:32.392333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.612 qpair failed and we were unable to recover it.
00:30:43.612 [2024-11-20 15:40:32.402298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.612 [2024-11-20 15:40:32.402352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.612 [2024-11-20 15:40:32.402365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.612 [2024-11-20 15:40:32.402372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.612 [2024-11-20 15:40:32.402379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.612 [2024-11-20 15:40:32.402393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.612 qpair failed and we were unable to recover it.
00:30:43.612 [2024-11-20 15:40:32.412392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.612 [2024-11-20 15:40:32.412451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.612 [2024-11-20 15:40:32.412465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.612 [2024-11-20 15:40:32.412472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.612 [2024-11-20 15:40:32.412479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.612 [2024-11-20 15:40:32.412493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.612 qpair failed and we were unable to recover it.
00:30:43.612 [2024-11-20 15:40:32.422435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.612 [2024-11-20 15:40:32.422517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.612 [2024-11-20 15:40:32.422530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.612 [2024-11-20 15:40:32.422542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.612 [2024-11-20 15:40:32.422548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.612 [2024-11-20 15:40:32.422563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.612 qpair failed and we were unable to recover it.
00:30:43.612 [2024-11-20 15:40:32.432417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.612 [2024-11-20 15:40:32.432469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.612 [2024-11-20 15:40:32.432482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.612 [2024-11-20 15:40:32.432489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.612 [2024-11-20 15:40:32.432496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.612 [2024-11-20 15:40:32.432510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.612 qpair failed and we were unable to recover it.
00:30:43.612 [2024-11-20 15:40:32.442439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.613 [2024-11-20 15:40:32.442493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.613 [2024-11-20 15:40:32.442506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.613 [2024-11-20 15:40:32.442513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.613 [2024-11-20 15:40:32.442520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.613 [2024-11-20 15:40:32.442534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.613 qpair failed and we were unable to recover it.
00:30:43.613 [2024-11-20 15:40:32.452495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.613 [2024-11-20 15:40:32.452547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.613 [2024-11-20 15:40:32.452560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.613 [2024-11-20 15:40:32.452567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.613 [2024-11-20 15:40:32.452574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.613 [2024-11-20 15:40:32.452588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.613 qpair failed and we were unable to recover it.
00:30:43.613 [2024-11-20 15:40:32.462515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.613 [2024-11-20 15:40:32.462566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.613 [2024-11-20 15:40:32.462579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.613 [2024-11-20 15:40:32.462586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.613 [2024-11-20 15:40:32.462593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.613 [2024-11-20 15:40:32.462610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.613 qpair failed and we were unable to recover it.
00:30:43.613 [2024-11-20 15:40:32.472554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.613 [2024-11-20 15:40:32.472610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.613 [2024-11-20 15:40:32.472623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.613 [2024-11-20 15:40:32.472630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.613 [2024-11-20 15:40:32.472637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.613 [2024-11-20 15:40:32.472651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.613 qpair failed and we were unable to recover it.
00:30:43.613 [2024-11-20 15:40:32.482538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.613 [2024-11-20 15:40:32.482601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.613 [2024-11-20 15:40:32.482615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.613 [2024-11-20 15:40:32.482623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.613 [2024-11-20 15:40:32.482629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.613 [2024-11-20 15:40:32.482644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.613 qpair failed and we were unable to recover it.
00:30:43.613 [2024-11-20 15:40:32.492602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.613 [2024-11-20 15:40:32.492650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.613 [2024-11-20 15:40:32.492663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.613 [2024-11-20 15:40:32.492670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.613 [2024-11-20 15:40:32.492677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.613 [2024-11-20 15:40:32.492692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.613 qpair failed and we were unable to recover it.
00:30:43.613 [2024-11-20 15:40:32.502616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.613 [2024-11-20 15:40:32.502663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.613 [2024-11-20 15:40:32.502676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.613 [2024-11-20 15:40:32.502684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.613 [2024-11-20 15:40:32.502690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.613 [2024-11-20 15:40:32.502705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.613 qpair failed and we were unable to recover it.
00:30:43.613 [2024-11-20 15:40:32.512675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.613 [2024-11-20 15:40:32.512737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.613 [2024-11-20 15:40:32.512750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.613 [2024-11-20 15:40:32.512758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.613 [2024-11-20 15:40:32.512764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.613 [2024-11-20 15:40:32.512779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.613 qpair failed and we were unable to recover it.
00:30:43.613 [2024-11-20 15:40:32.522670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.613 [2024-11-20 15:40:32.522722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.613 [2024-11-20 15:40:32.522736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.613 [2024-11-20 15:40:32.522744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.613 [2024-11-20 15:40:32.522750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.613 [2024-11-20 15:40:32.522765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.613 qpair failed and we were unable to recover it.
00:30:43.613 [2024-11-20 15:40:32.532584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.613 [2024-11-20 15:40:32.532638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.613 [2024-11-20 15:40:32.532651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.613 [2024-11-20 15:40:32.532658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.613 [2024-11-20 15:40:32.532665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.613 [2024-11-20 15:40:32.532679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.613 qpair failed and we were unable to recover it.
00:30:43.613 [2024-11-20 15:40:32.542716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.613 [2024-11-20 15:40:32.542769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.613 [2024-11-20 15:40:32.542782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.613 [2024-11-20 15:40:32.542789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.613 [2024-11-20 15:40:32.542797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.613 [2024-11-20 15:40:32.542811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.613 qpair failed and we were unable to recover it.
00:30:43.613 [2024-11-20 15:40:32.552720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.613 [2024-11-20 15:40:32.552773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.613 [2024-11-20 15:40:32.552789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.613 [2024-11-20 15:40:32.552796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.613 [2024-11-20 15:40:32.552803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.613 [2024-11-20 15:40:32.552818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.613 qpair failed and we were unable to recover it.
00:30:43.613 [2024-11-20 15:40:32.562638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.613 [2024-11-20 15:40:32.562687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.613 [2024-11-20 15:40:32.562699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.613 [2024-11-20 15:40:32.562707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.613 [2024-11-20 15:40:32.562713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.614 [2024-11-20 15:40:32.562728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.614 qpair failed and we were unable to recover it.
00:30:43.876 [2024-11-20 15:40:32.572812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.876 [2024-11-20 15:40:32.572864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.876 [2024-11-20 15:40:32.572877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.876 [2024-11-20 15:40:32.572885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.876 [2024-11-20 15:40:32.572891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.876 [2024-11-20 15:40:32.572905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.876 qpair failed and we were unable to recover it.
00:30:43.876 [2024-11-20 15:40:32.582830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.876 [2024-11-20 15:40:32.582885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.876 [2024-11-20 15:40:32.582898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.876 [2024-11-20 15:40:32.582905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.876 [2024-11-20 15:40:32.582912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.876 [2024-11-20 15:40:32.582926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.876 qpair failed and we were unable to recover it.
00:30:43.876 [2024-11-20 15:40:32.592929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.877 [2024-11-20 15:40:32.592985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.877 [2024-11-20 15:40:32.592998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.877 [2024-11-20 15:40:32.593005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.877 [2024-11-20 15:40:32.593011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.877 [2024-11-20 15:40:32.593030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.877 qpair failed and we were unable to recover it.
00:30:43.877 [2024-11-20 15:40:32.602863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.877 [2024-11-20 15:40:32.602917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.877 [2024-11-20 15:40:32.602930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.877 [2024-11-20 15:40:32.602938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.877 [2024-11-20 15:40:32.602944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.877 [2024-11-20 15:40:32.602959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.877 qpair failed and we were unable to recover it.
00:30:43.877 [2024-11-20 15:40:32.612925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.877 [2024-11-20 15:40:32.612977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.877 [2024-11-20 15:40:32.612990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.877 [2024-11-20 15:40:32.612997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.877 [2024-11-20 15:40:32.613004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.877 [2024-11-20 15:40:32.613018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.877 qpair failed and we were unable to recover it.
00:30:43.877 [2024-11-20 15:40:32.622969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.877 [2024-11-20 15:40:32.623060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.877 [2024-11-20 15:40:32.623072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.877 [2024-11-20 15:40:32.623080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.877 [2024-11-20 15:40:32.623087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.877 [2024-11-20 15:40:32.623101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.877 qpair failed and we were unable to recover it.
00:30:43.877 [2024-11-20 15:40:32.632965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:43.877 [2024-11-20 15:40:32.633052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:43.877 [2024-11-20 15:40:32.633065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:43.877 [2024-11-20 15:40:32.633072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:43.877 [2024-11-20 15:40:32.633079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:43.877 [2024-11-20 15:40:32.633093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.877 qpair failed and we were unable to recover it.
00:30:43.877 [2024-11-20 15:40:32.642981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.877 [2024-11-20 15:40:32.643062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.877 [2024-11-20 15:40:32.643075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.877 [2024-11-20 15:40:32.643082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.877 [2024-11-20 15:40:32.643089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.877 [2024-11-20 15:40:32.643103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.877 qpair failed and we were unable to recover it. 00:30:43.877 [2024-11-20 15:40:32.653040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.877 [2024-11-20 15:40:32.653096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.877 [2024-11-20 15:40:32.653110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.877 [2024-11-20 15:40:32.653117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.877 [2024-11-20 15:40:32.653124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.877 [2024-11-20 15:40:32.653138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.877 qpair failed and we were unable to recover it. 00:30:43.877 [2024-11-20 15:40:32.663080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.877 [2024-11-20 15:40:32.663133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.877 [2024-11-20 15:40:32.663147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.877 [2024-11-20 15:40:32.663156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.877 [2024-11-20 15:40:32.663167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.877 [2024-11-20 15:40:32.663181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.877 qpair failed and we were unable to recover it. 
00:30:43.877 [2024-11-20 15:40:32.673103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.877 [2024-11-20 15:40:32.673162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.877 [2024-11-20 15:40:32.673175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.877 [2024-11-20 15:40:32.673182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.877 [2024-11-20 15:40:32.673189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.877 [2024-11-20 15:40:32.673204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.877 qpair failed and we were unable to recover it. 00:30:43.877 [2024-11-20 15:40:32.683104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.877 [2024-11-20 15:40:32.683162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.877 [2024-11-20 15:40:32.683178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.877 [2024-11-20 15:40:32.683186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.877 [2024-11-20 15:40:32.683192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.877 [2024-11-20 15:40:32.683206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.877 qpair failed and we were unable to recover it. 00:30:43.877 [2024-11-20 15:40:32.693157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.877 [2024-11-20 15:40:32.693210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.877 [2024-11-20 15:40:32.693223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.877 [2024-11-20 15:40:32.693230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.877 [2024-11-20 15:40:32.693237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.877 [2024-11-20 15:40:32.693251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.877 qpair failed and we were unable to recover it. 
00:30:43.878 [2024-11-20 15:40:32.703176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.878 [2024-11-20 15:40:32.703233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.878 [2024-11-20 15:40:32.703246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.878 [2024-11-20 15:40:32.703253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.878 [2024-11-20 15:40:32.703260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.878 [2024-11-20 15:40:32.703274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.878 qpair failed and we were unable to recover it. 00:30:43.878 [2024-11-20 15:40:32.713217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.878 [2024-11-20 15:40:32.713270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.878 [2024-11-20 15:40:32.713283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.878 [2024-11-20 15:40:32.713290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.878 [2024-11-20 15:40:32.713296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.878 [2024-11-20 15:40:32.713311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.878 qpair failed and we were unable to recover it. 00:30:43.878 [2024-11-20 15:40:32.723203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.878 [2024-11-20 15:40:32.723257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.878 [2024-11-20 15:40:32.723270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.878 [2024-11-20 15:40:32.723277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.878 [2024-11-20 15:40:32.723287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.878 [2024-11-20 15:40:32.723302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.878 qpair failed and we were unable to recover it. 
00:30:43.878 [2024-11-20 15:40:32.733225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.878 [2024-11-20 15:40:32.733270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.878 [2024-11-20 15:40:32.733283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.878 [2024-11-20 15:40:32.733290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.878 [2024-11-20 15:40:32.733297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.878 [2024-11-20 15:40:32.733311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.878 qpair failed and we were unable to recover it. 00:30:43.878 [2024-11-20 15:40:32.743294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.878 [2024-11-20 15:40:32.743342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.878 [2024-11-20 15:40:32.743355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.878 [2024-11-20 15:40:32.743363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.878 [2024-11-20 15:40:32.743369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.878 [2024-11-20 15:40:32.743384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.878 qpair failed and we were unable to recover it. 00:30:43.878 [2024-11-20 15:40:32.753322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.878 [2024-11-20 15:40:32.753374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.878 [2024-11-20 15:40:32.753387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.878 [2024-11-20 15:40:32.753394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.878 [2024-11-20 15:40:32.753401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.878 [2024-11-20 15:40:32.753416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.878 qpair failed and we were unable to recover it. 
00:30:43.878 [2024-11-20 15:40:32.763286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.878 [2024-11-20 15:40:32.763336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.878 [2024-11-20 15:40:32.763349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.878 [2024-11-20 15:40:32.763356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.878 [2024-11-20 15:40:32.763363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.878 [2024-11-20 15:40:32.763377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.878 qpair failed and we were unable to recover it. 00:30:43.878 [2024-11-20 15:40:32.773340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.878 [2024-11-20 15:40:32.773388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.878 [2024-11-20 15:40:32.773401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.878 [2024-11-20 15:40:32.773409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.878 [2024-11-20 15:40:32.773415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.878 [2024-11-20 15:40:32.773429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.878 qpair failed and we were unable to recover it. 00:30:43.878 [2024-11-20 15:40:32.783452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.878 [2024-11-20 15:40:32.783545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.878 [2024-11-20 15:40:32.783558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.878 [2024-11-20 15:40:32.783566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.878 [2024-11-20 15:40:32.783573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.878 [2024-11-20 15:40:32.783587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.878 qpair failed and we were unable to recover it. 
00:30:43.878 [2024-11-20 15:40:32.793439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.878 [2024-11-20 15:40:32.793494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.878 [2024-11-20 15:40:32.793507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.878 [2024-11-20 15:40:32.793514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.878 [2024-11-20 15:40:32.793520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.878 [2024-11-20 15:40:32.793535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.878 qpair failed and we were unable to recover it. 00:30:43.878 [2024-11-20 15:40:32.803451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.878 [2024-11-20 15:40:32.803503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.878 [2024-11-20 15:40:32.803516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.878 [2024-11-20 15:40:32.803523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.878 [2024-11-20 15:40:32.803529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.878 [2024-11-20 15:40:32.803544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.878 qpair failed and we were unable to recover it. 00:30:43.878 [2024-11-20 15:40:32.813450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.878 [2024-11-20 15:40:32.813493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.879 [2024-11-20 15:40:32.813509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.879 [2024-11-20 15:40:32.813517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.879 [2024-11-20 15:40:32.813523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.879 [2024-11-20 15:40:32.813537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.879 qpair failed and we were unable to recover it. 
00:30:43.879 [2024-11-20 15:40:32.823528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.879 [2024-11-20 15:40:32.823580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.879 [2024-11-20 15:40:32.823594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.879 [2024-11-20 15:40:32.823601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.879 [2024-11-20 15:40:32.823608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.879 [2024-11-20 15:40:32.823626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.879 qpair failed and we were unable to recover it. 00:30:43.879 [2024-11-20 15:40:32.833556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.879 [2024-11-20 15:40:32.833611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.879 [2024-11-20 15:40:32.833624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.879 [2024-11-20 15:40:32.833631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.879 [2024-11-20 15:40:32.833638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:43.879 [2024-11-20 15:40:32.833653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.879 qpair failed and we were unable to recover it. 00:30:44.141 [2024-11-20 15:40:32.843558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.141 [2024-11-20 15:40:32.843609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.141 [2024-11-20 15:40:32.843622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.141 [2024-11-20 15:40:32.843629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.141 [2024-11-20 15:40:32.843636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:44.141 [2024-11-20 15:40:32.843650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:44.141 qpair failed and we were unable to recover it. 
00:30:44.141 [2024-11-20 15:40:32.853550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.141 [2024-11-20 15:40:32.853594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.141 [2024-11-20 15:40:32.853607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.141 [2024-11-20 15:40:32.853618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.141 [2024-11-20 15:40:32.853625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:44.141 [2024-11-20 15:40:32.853639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:44.141 qpair failed and we were unable to recover it. 00:30:44.141 [2024-11-20 15:40:32.863640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.141 [2024-11-20 15:40:32.863691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.141 [2024-11-20 15:40:32.863704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.141 [2024-11-20 15:40:32.863711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.141 [2024-11-20 15:40:32.863717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:44.141 [2024-11-20 15:40:32.863732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:44.141 qpair failed and we were unable to recover it. 00:30:44.141 [2024-11-20 15:40:32.873702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.141 [2024-11-20 15:40:32.873755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.141 [2024-11-20 15:40:32.873768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.141 [2024-11-20 15:40:32.873776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.141 [2024-11-20 15:40:32.873782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:44.141 [2024-11-20 15:40:32.873796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:44.141 qpair failed and we were unable to recover it. 
00:30:44.141 [2024-11-20 15:40:32.883660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.141 [2024-11-20 15:40:32.883712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.141 [2024-11-20 15:40:32.883725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.141 [2024-11-20 15:40:32.883732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.141 [2024-11-20 15:40:32.883738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:44.141 [2024-11-20 15:40:32.883753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:44.141 qpair failed and we were unable to recover it. 00:30:44.141 [2024-11-20 15:40:32.893663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.141 [2024-11-20 15:40:32.893710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.141 [2024-11-20 15:40:32.893724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.141 [2024-11-20 15:40:32.893731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.141 [2024-11-20 15:40:32.893737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:44.141 [2024-11-20 15:40:32.893752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:44.141 qpair failed and we were unable to recover it. 00:30:44.141 [2024-11-20 15:40:32.903733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.141 [2024-11-20 15:40:32.903785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.141 [2024-11-20 15:40:32.903798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.141 [2024-11-20 15:40:32.903805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.141 [2024-11-20 15:40:32.903811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:44.141 [2024-11-20 15:40:32.903826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:44.141 qpair failed and we were unable to recover it. 
00:30:44.141 [2024-11-20 15:40:32.913772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.141 [2024-11-20 15:40:32.913825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.141 [2024-11-20 15:40:32.913838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.141 [2024-11-20 15:40:32.913846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.141 [2024-11-20 15:40:32.913853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:44.141 [2024-11-20 15:40:32.913867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:44.141 qpair failed and we were unable to recover it. 00:30:44.141 [2024-11-20 15:40:32.923756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.141 [2024-11-20 15:40:32.923804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.141 [2024-11-20 15:40:32.923817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.141 [2024-11-20 15:40:32.923824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.141 [2024-11-20 15:40:32.923831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:44.141 [2024-11-20 15:40:32.923845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:44.141 qpair failed and we were unable to recover it. 00:30:44.141 [2024-11-20 15:40:32.933762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.141 [2024-11-20 15:40:32.933831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.141 [2024-11-20 15:40:32.933844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.141 [2024-11-20 15:40:32.933852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.141 [2024-11-20 15:40:32.933858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:44.141 [2024-11-20 15:40:32.933873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:44.141 qpair failed and we were unable to recover it. 
00:30:44.141 [2024-11-20 15:40:32.943876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.141 [2024-11-20 15:40:32.943971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.141 [2024-11-20 15:40:32.943989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.141 [2024-11-20 15:40:32.943996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.141 [2024-11-20 15:40:32.944003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:44.141 [2024-11-20 15:40:32.944019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:44.142 qpair failed and we were unable to recover it. 00:30:44.142 [2024-11-20 15:40:32.953880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.142 [2024-11-20 15:40:32.953939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.142 [2024-11-20 15:40:32.953964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.142 [2024-11-20 15:40:32.953973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.142 [2024-11-20 15:40:32.953981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:44.142 [2024-11-20 15:40:32.954002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:44.142 qpair failed and we were unable to recover it. 00:30:44.142 [2024-11-20 15:40:32.963886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.142 [2024-11-20 15:40:32.963974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.142 [2024-11-20 15:40:32.964000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.142 [2024-11-20 15:40:32.964009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.142 [2024-11-20 15:40:32.964016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:44.142 [2024-11-20 15:40:32.964037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:44.142 qpair failed and we were unable to recover it. 
00:30:44.142 [2024-11-20 15:40:32.973771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.142 [2024-11-20 15:40:32.973819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.142 [2024-11-20 15:40:32.973834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.142 [2024-11-20 15:40:32.973842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.142 [2024-11-20 15:40:32.973849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:44.142 [2024-11-20 15:40:32.973865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:44.142 qpair failed and we were unable to recover it. 00:30:44.142 [2024-11-20 15:40:32.983958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.142 [2024-11-20 15:40:32.984009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.142 [2024-11-20 15:40:32.984022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.142 [2024-11-20 15:40:32.984034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.142 [2024-11-20 15:40:32.984041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:44.142 [2024-11-20 15:40:32.984056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:44.142 qpair failed and we were unable to recover it. 00:30:44.142 [2024-11-20 15:40:32.993999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.142 [2024-11-20 15:40:32.994056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.142 [2024-11-20 15:40:32.994070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.142 [2024-11-20 15:40:32.994077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.142 [2024-11-20 15:40:32.994084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:44.142 [2024-11-20 15:40:32.994099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:44.142 qpair failed and we were unable to recover it. 
00:30:44.142 [2024-11-20 15:40:33.004008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.142 [2024-11-20 15:40:33.004056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.142 [2024-11-20 15:40:33.004069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.142 [2024-11-20 15:40:33.004077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.142 [2024-11-20 15:40:33.004084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:44.142 [2024-11-20 15:40:33.004099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:44.142 qpair failed and we were unable to recover it. 00:30:44.142 [2024-11-20 15:40:33.014003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.142 [2024-11-20 15:40:33.014051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.142 [2024-11-20 15:40:33.014064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.142 [2024-11-20 15:40:33.014071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.142 [2024-11-20 15:40:33.014078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:44.142 [2024-11-20 15:40:33.014093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:44.142 qpair failed and we were unable to recover it. 00:30:44.142 [2024-11-20 15:40:33.024064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.142 [2024-11-20 15:40:33.024117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.142 [2024-11-20 15:40:33.024130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.142 [2024-11-20 15:40:33.024137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.142 [2024-11-20 15:40:33.024144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:44.142 [2024-11-20 15:40:33.024166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:44.142 qpair failed and we were unable to recover it. 
00:30:44.142 [2024-11-20 15:40:33.034112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.142 [2024-11-20 15:40:33.034193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.142 [2024-11-20 15:40:33.034206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.142 [2024-11-20 15:40:33.034214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.142 [2024-11-20 15:40:33.034221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:44.142 [2024-11-20 15:40:33.034236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:44.142 qpair failed and we were unable to recover it. 00:30:44.142 [2024-11-20 15:40:33.044076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.142 [2024-11-20 15:40:33.044130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.142 [2024-11-20 15:40:33.044144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.142 [2024-11-20 15:40:33.044151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.142 [2024-11-20 15:40:33.044161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:44.142 [2024-11-20 15:40:33.044177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:44.142 qpair failed and we were unable to recover it. 00:30:44.142 [2024-11-20 15:40:33.054169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.142 [2024-11-20 15:40:33.054241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.142 [2024-11-20 15:40:33.054254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.142 [2024-11-20 15:40:33.054261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.142 [2024-11-20 15:40:33.054267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:44.142 [2024-11-20 15:40:33.054283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:44.142 qpair failed and we were unable to recover it. 
00:30:44.142 [2024-11-20 15:40:33.064186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.142 [2024-11-20 15:40:33.064236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.143 [2024-11-20 15:40:33.064249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.143 [2024-11-20 15:40:33.064256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.143 [2024-11-20 15:40:33.064263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:44.143 [2024-11-20 15:40:33.064277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:44.143 qpair failed and we were unable to recover it. 00:30:44.143 [2024-11-20 15:40:33.074211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.143 [2024-11-20 15:40:33.074276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.143 [2024-11-20 15:40:33.074290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.143 [2024-11-20 15:40:33.074297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.143 [2024-11-20 15:40:33.074303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:44.143 [2024-11-20 15:40:33.074318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:44.143 qpair failed and we were unable to recover it. 00:30:44.143 [2024-11-20 15:40:33.084206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.143 [2024-11-20 15:40:33.084254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.143 [2024-11-20 15:40:33.084267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.143 [2024-11-20 15:40:33.084274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.143 [2024-11-20 15:40:33.084281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:44.143 [2024-11-20 15:40:33.084295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:44.143 qpair failed and we were unable to recover it. 
00:30:44.143 [2024-11-20 15:40:33.094233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.143 [2024-11-20 15:40:33.094287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.143 [2024-11-20 15:40:33.094301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.143 [2024-11-20 15:40:33.094308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.143 [2024-11-20 15:40:33.094314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:44.143 [2024-11-20 15:40:33.094329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:44.143 qpair failed and we were unable to recover it. 00:30:44.404 [2024-11-20 15:40:33.104295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.404 [2024-11-20 15:40:33.104350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.404 [2024-11-20 15:40:33.104364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.404 [2024-11-20 15:40:33.104371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.404 [2024-11-20 15:40:33.104378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:44.404 [2024-11-20 15:40:33.104393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:44.404 qpair failed and we were unable to recover it. 00:30:44.404 [2024-11-20 15:40:33.114320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.404 [2024-11-20 15:40:33.114374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.404 [2024-11-20 15:40:33.114391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.404 [2024-11-20 15:40:33.114398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.404 [2024-11-20 15:40:33.114405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:44.404 [2024-11-20 15:40:33.114419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:44.404 qpair failed and we were unable to recover it. 
00:30:44.404 [2024-11-20 15:40:33.124294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.405 [2024-11-20 15:40:33.124379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.405 [2024-11-20 15:40:33.124392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.405 [2024-11-20 15:40:33.124399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.405 [2024-11-20 15:40:33.124406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:44.405 [2024-11-20 15:40:33.124420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:44.405 qpair failed and we were unable to recover it. 00:30:44.405 [2024-11-20 15:40:33.134317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.405 [2024-11-20 15:40:33.134373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.405 [2024-11-20 15:40:33.134386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.405 [2024-11-20 15:40:33.134393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.405 [2024-11-20 15:40:33.134399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:44.405 [2024-11-20 15:40:33.134414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:44.405 qpair failed and we were unable to recover it. 00:30:44.405 [2024-11-20 15:40:33.144403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.405 [2024-11-20 15:40:33.144452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.405 [2024-11-20 15:40:33.144466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.405 [2024-11-20 15:40:33.144473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.405 [2024-11-20 15:40:33.144480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:44.405 [2024-11-20 15:40:33.144495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:44.405 qpair failed and we were unable to recover it. 
00:30:44.405 [2024-11-20 15:40:33.154329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.405 [2024-11-20 15:40:33.154383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.405 [2024-11-20 15:40:33.154396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.405 [2024-11-20 15:40:33.154403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.405 [2024-11-20 15:40:33.154410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.405 [2024-11-20 15:40:33.154428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.405 qpair failed and we were unable to recover it.
00:30:44.405 [2024-11-20 15:40:33.164484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.405 [2024-11-20 15:40:33.164532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.405 [2024-11-20 15:40:33.164545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.405 [2024-11-20 15:40:33.164552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.405 [2024-11-20 15:40:33.164559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.405 [2024-11-20 15:40:33.164574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.405 qpair failed and we were unable to recover it.
00:30:44.405 [2024-11-20 15:40:33.174431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.405 [2024-11-20 15:40:33.174486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.405 [2024-11-20 15:40:33.174500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.405 [2024-11-20 15:40:33.174507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.405 [2024-11-20 15:40:33.174514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.405 [2024-11-20 15:40:33.174528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.405 qpair failed and we were unable to recover it.
00:30:44.405 [2024-11-20 15:40:33.184500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.405 [2024-11-20 15:40:33.184558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.405 [2024-11-20 15:40:33.184572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.405 [2024-11-20 15:40:33.184580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.405 [2024-11-20 15:40:33.184587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.405 [2024-11-20 15:40:33.184606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.405 qpair failed and we were unable to recover it.
00:30:44.405 [2024-11-20 15:40:33.194502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.405 [2024-11-20 15:40:33.194568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.405 [2024-11-20 15:40:33.194582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.405 [2024-11-20 15:40:33.194589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.405 [2024-11-20 15:40:33.194596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.405 [2024-11-20 15:40:33.194610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.405 qpair failed and we were unable to recover it.
00:30:44.405 [2024-11-20 15:40:33.204530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.405 [2024-11-20 15:40:33.204580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.405 [2024-11-20 15:40:33.204593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.405 [2024-11-20 15:40:33.204601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.405 [2024-11-20 15:40:33.204607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.405 [2024-11-20 15:40:33.204622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.405 qpair failed and we were unable to recover it.
00:30:44.405 [2024-11-20 15:40:33.214531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.405 [2024-11-20 15:40:33.214585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.405 [2024-11-20 15:40:33.214598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.405 [2024-11-20 15:40:33.214606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.405 [2024-11-20 15:40:33.214612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.405 [2024-11-20 15:40:33.214627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.405 qpair failed and we were unable to recover it.
00:30:44.405 [2024-11-20 15:40:33.224630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.405 [2024-11-20 15:40:33.224711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.405 [2024-11-20 15:40:33.224725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.405 [2024-11-20 15:40:33.224732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.405 [2024-11-20 15:40:33.224739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.405 [2024-11-20 15:40:33.224753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.405 qpair failed and we were unable to recover it.
00:30:44.405 [2024-11-20 15:40:33.234649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.405 [2024-11-20 15:40:33.234746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.405 [2024-11-20 15:40:33.234760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.405 [2024-11-20 15:40:33.234767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.405 [2024-11-20 15:40:33.234774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.405 [2024-11-20 15:40:33.234789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.405 qpair failed and we were unable to recover it.
00:30:44.405 [2024-11-20 15:40:33.244629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.405 [2024-11-20 15:40:33.244677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.405 [2024-11-20 15:40:33.244694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.405 [2024-11-20 15:40:33.244702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.405 [2024-11-20 15:40:33.244708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.405 [2024-11-20 15:40:33.244723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.405 qpair failed and we were unable to recover it.
00:30:44.406 [2024-11-20 15:40:33.254675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.406 [2024-11-20 15:40:33.254723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.406 [2024-11-20 15:40:33.254737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.406 [2024-11-20 15:40:33.254744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.406 [2024-11-20 15:40:33.254751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.406 [2024-11-20 15:40:33.254765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.406 qpair failed and we were unable to recover it.
00:30:44.406 [2024-11-20 15:40:33.264601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.406 [2024-11-20 15:40:33.264654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.406 [2024-11-20 15:40:33.264667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.406 [2024-11-20 15:40:33.264674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.406 [2024-11-20 15:40:33.264681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.406 [2024-11-20 15:40:33.264695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.406 qpair failed and we were unable to recover it.
00:30:44.406 [2024-11-20 15:40:33.274757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.406 [2024-11-20 15:40:33.274815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.406 [2024-11-20 15:40:33.274828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.406 [2024-11-20 15:40:33.274836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.406 [2024-11-20 15:40:33.274842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.406 [2024-11-20 15:40:33.274856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.406 qpair failed and we were unable to recover it.
00:30:44.406 [2024-11-20 15:40:33.284765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.406 [2024-11-20 15:40:33.284820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.406 [2024-11-20 15:40:33.284833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.406 [2024-11-20 15:40:33.284840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.406 [2024-11-20 15:40:33.284850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.406 [2024-11-20 15:40:33.284865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.406 qpair failed and we were unable to recover it.
00:30:44.406 [2024-11-20 15:40:33.294787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.406 [2024-11-20 15:40:33.294837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.406 [2024-11-20 15:40:33.294850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.406 [2024-11-20 15:40:33.294858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.406 [2024-11-20 15:40:33.294864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.406 [2024-11-20 15:40:33.294879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.406 qpair failed and we were unable to recover it.
00:30:44.406 [2024-11-20 15:40:33.304822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.406 [2024-11-20 15:40:33.304880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.406 [2024-11-20 15:40:33.304893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.406 [2024-11-20 15:40:33.304900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.406 [2024-11-20 15:40:33.304907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.406 [2024-11-20 15:40:33.304921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.406 qpair failed and we were unable to recover it.
00:30:44.406 [2024-11-20 15:40:33.314877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.406 [2024-11-20 15:40:33.314931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.406 [2024-11-20 15:40:33.314943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.406 [2024-11-20 15:40:33.314951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.406 [2024-11-20 15:40:33.314957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.406 [2024-11-20 15:40:33.314971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.406 qpair failed and we were unable to recover it.
00:30:44.406 [2024-11-20 15:40:33.324869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.406 [2024-11-20 15:40:33.324930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.406 [2024-11-20 15:40:33.324943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.406 [2024-11-20 15:40:33.324950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.406 [2024-11-20 15:40:33.324957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.406 [2024-11-20 15:40:33.324972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.406 qpair failed and we were unable to recover it.
00:30:44.406 [2024-11-20 15:40:33.334829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.406 [2024-11-20 15:40:33.334874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.406 [2024-11-20 15:40:33.334887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.406 [2024-11-20 15:40:33.334895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.406 [2024-11-20 15:40:33.334901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.406 [2024-11-20 15:40:33.334916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.406 qpair failed and we were unable to recover it.
00:30:44.406 [2024-11-20 15:40:33.344920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.406 [2024-11-20 15:40:33.344980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.406 [2024-11-20 15:40:33.345005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.406 [2024-11-20 15:40:33.345014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.406 [2024-11-20 15:40:33.345022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.406 [2024-11-20 15:40:33.345043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.406 qpair failed and we were unable to recover it.
00:30:44.406 [2024-11-20 15:40:33.354858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.406 [2024-11-20 15:40:33.354958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.406 [2024-11-20 15:40:33.354974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.406 [2024-11-20 15:40:33.354982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.406 [2024-11-20 15:40:33.354989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.406 [2024-11-20 15:40:33.355005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.406 qpair failed and we were unable to recover it.
00:30:44.669 [2024-11-20 15:40:33.364988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.669 [2024-11-20 15:40:33.365037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.669 [2024-11-20 15:40:33.365051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.669 [2024-11-20 15:40:33.365059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.669 [2024-11-20 15:40:33.365066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.669 [2024-11-20 15:40:33.365081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.669 qpair failed and we were unable to recover it.
00:30:44.669 [2024-11-20 15:40:33.374995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.669 [2024-11-20 15:40:33.375047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.669 [2024-11-20 15:40:33.375065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.669 [2024-11-20 15:40:33.375072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.669 [2024-11-20 15:40:33.375079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.669 [2024-11-20 15:40:33.375093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.669 qpair failed and we were unable to recover it.
00:30:44.669 [2024-11-20 15:40:33.385053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.669 [2024-11-20 15:40:33.385102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.669 [2024-11-20 15:40:33.385116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.669 [2024-11-20 15:40:33.385123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.669 [2024-11-20 15:40:33.385130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.669 [2024-11-20 15:40:33.385145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.669 qpair failed and we were unable to recover it.
00:30:44.669 [2024-11-20 15:40:33.395039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.669 [2024-11-20 15:40:33.395126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.669 [2024-11-20 15:40:33.395140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.669 [2024-11-20 15:40:33.395148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.669 [2024-11-20 15:40:33.395154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.669 [2024-11-20 15:40:33.395173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.669 qpair failed and we were unable to recover it.
00:30:44.669 [2024-11-20 15:40:33.405068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.669 [2024-11-20 15:40:33.405115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.669 [2024-11-20 15:40:33.405128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.669 [2024-11-20 15:40:33.405135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.669 [2024-11-20 15:40:33.405142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.670 [2024-11-20 15:40:33.405156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.670 qpair failed and we were unable to recover it.
00:30:44.670 [2024-11-20 15:40:33.415084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.670 [2024-11-20 15:40:33.415133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.670 [2024-11-20 15:40:33.415146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.670 [2024-11-20 15:40:33.415161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.670 [2024-11-20 15:40:33.415168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.670 [2024-11-20 15:40:33.415183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.670 qpair failed and we were unable to recover it.
00:30:44.670 [2024-11-20 15:40:33.425151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.670 [2024-11-20 15:40:33.425203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.670 [2024-11-20 15:40:33.425216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.670 [2024-11-20 15:40:33.425224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.670 [2024-11-20 15:40:33.425230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.670 [2024-11-20 15:40:33.425244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.670 qpair failed and we were unable to recover it.
00:30:44.670 [2024-11-20 15:40:33.435111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.670 [2024-11-20 15:40:33.435155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.670 [2024-11-20 15:40:33.435172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.670 [2024-11-20 15:40:33.435180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.670 [2024-11-20 15:40:33.435186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.670 [2024-11-20 15:40:33.435200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.670 qpair failed and we were unable to recover it.
00:30:44.670 [2024-11-20 15:40:33.445134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.670 [2024-11-20 15:40:33.445180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.670 [2024-11-20 15:40:33.445195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.670 [2024-11-20 15:40:33.445202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.670 [2024-11-20 15:40:33.445209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.670 [2024-11-20 15:40:33.445227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.670 qpair failed and we were unable to recover it.
00:30:44.670 [2024-11-20 15:40:33.455189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.670 [2024-11-20 15:40:33.455284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.670 [2024-11-20 15:40:33.455298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.670 [2024-11-20 15:40:33.455305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.670 [2024-11-20 15:40:33.455312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.670 [2024-11-20 15:40:33.455327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.670 qpair failed and we were unable to recover it.
00:30:44.670 [2024-11-20 15:40:33.465263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.670 [2024-11-20 15:40:33.465314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.670 [2024-11-20 15:40:33.465327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.670 [2024-11-20 15:40:33.465335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.670 [2024-11-20 15:40:33.465342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.670 [2024-11-20 15:40:33.465357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.670 qpair failed and we were unable to recover it.
00:30:44.670 [2024-11-20 15:40:33.475271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.670 [2024-11-20 15:40:33.475327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.670 [2024-11-20 15:40:33.475341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.670 [2024-11-20 15:40:33.475348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.670 [2024-11-20 15:40:33.475355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.670 [2024-11-20 15:40:33.475370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.670 qpair failed and we were unable to recover it.
00:30:44.670 [2024-11-20 15:40:33.485301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.670 [2024-11-20 15:40:33.485352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.670 [2024-11-20 15:40:33.485365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.670 [2024-11-20 15:40:33.485372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.670 [2024-11-20 15:40:33.485378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.670 [2024-11-20 15:40:33.485393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.670 qpair failed and we were unable to recover it.
00:30:44.670 [2024-11-20 15:40:33.495188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.670 [2024-11-20 15:40:33.495234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.670 [2024-11-20 15:40:33.495247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.670 [2024-11-20 15:40:33.495255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.670 [2024-11-20 15:40:33.495261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.670 [2024-11-20 15:40:33.495276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.670 qpair failed and we were unable to recover it.
00:30:44.670 [2024-11-20 15:40:33.505388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.670 [2024-11-20 15:40:33.505437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.670 [2024-11-20 15:40:33.505451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.670 [2024-11-20 15:40:33.505458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.670 [2024-11-20 15:40:33.505465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.670 [2024-11-20 15:40:33.505479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.670 qpair failed and we were unable to recover it.
00:30:44.670 [2024-11-20 15:40:33.515379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.670 [2024-11-20 15:40:33.515424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.670 [2024-11-20 15:40:33.515437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.670 [2024-11-20 15:40:33.515444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.670 [2024-11-20 15:40:33.515451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.670 [2024-11-20 15:40:33.515465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.670 qpair failed and we were unable to recover it.
00:30:44.670 [2024-11-20 15:40:33.525404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.670 [2024-11-20 15:40:33.525448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.670 [2024-11-20 15:40:33.525461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.670 [2024-11-20 15:40:33.525468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.670 [2024-11-20 15:40:33.525475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.670 [2024-11-20 15:40:33.525489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.670 qpair failed and we were unable to recover it.
00:30:44.670 [2024-11-20 15:40:33.535415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.670 [2024-11-20 15:40:33.535467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.670 [2024-11-20 15:40:33.535480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.670 [2024-11-20 15:40:33.535488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.670 [2024-11-20 15:40:33.535494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.671 [2024-11-20 15:40:33.535509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.671 qpair failed and we were unable to recover it.
00:30:44.671 [2024-11-20 15:40:33.545450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.671 [2024-11-20 15:40:33.545521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.671 [2024-11-20 15:40:33.545535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.671 [2024-11-20 15:40:33.545545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.671 [2024-11-20 15:40:33.545553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.671 [2024-11-20 15:40:33.545567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.671 qpair failed and we were unable to recover it.
00:30:44.671 [2024-11-20 15:40:33.555476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.671 [2024-11-20 15:40:33.555524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.671 [2024-11-20 15:40:33.555537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.671 [2024-11-20 15:40:33.555544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.671 [2024-11-20 15:40:33.555551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.671 [2024-11-20 15:40:33.555565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.671 qpair failed and we were unable to recover it.
00:30:44.671 [2024-11-20 15:40:33.565498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.671 [2024-11-20 15:40:33.565551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.671 [2024-11-20 15:40:33.565565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.671 [2024-11-20 15:40:33.565572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.671 [2024-11-20 15:40:33.565578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.671 [2024-11-20 15:40:33.565593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.671 qpair failed and we were unable to recover it.
00:30:44.671 [2024-11-20 15:40:33.575527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.671 [2024-11-20 15:40:33.575575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.671 [2024-11-20 15:40:33.575588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.671 [2024-11-20 15:40:33.575596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.671 [2024-11-20 15:40:33.575602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.671 [2024-11-20 15:40:33.575617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.671 qpair failed and we were unable to recover it.
00:30:44.671 [2024-11-20 15:40:33.585596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.671 [2024-11-20 15:40:33.585651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.671 [2024-11-20 15:40:33.585664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.671 [2024-11-20 15:40:33.585672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.671 [2024-11-20 15:40:33.585679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.671 [2024-11-20 15:40:33.585697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.671 qpair failed and we were unable to recover it.
00:30:44.671 [2024-11-20 15:40:33.595589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.671 [2024-11-20 15:40:33.595640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.671 [2024-11-20 15:40:33.595653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.671 [2024-11-20 15:40:33.595660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.671 [2024-11-20 15:40:33.595667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.671 [2024-11-20 15:40:33.595682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.671 qpair failed and we were unable to recover it.
00:30:44.671 [2024-11-20 15:40:33.605642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.671 [2024-11-20 15:40:33.605688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.671 [2024-11-20 15:40:33.605701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.671 [2024-11-20 15:40:33.605709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.671 [2024-11-20 15:40:33.605716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.671 [2024-11-20 15:40:33.605730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.671 qpair failed and we were unable to recover it.
00:30:44.671 [2024-11-20 15:40:33.615595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.671 [2024-11-20 15:40:33.615644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.671 [2024-11-20 15:40:33.615657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.671 [2024-11-20 15:40:33.615665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.671 [2024-11-20 15:40:33.615671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.671 [2024-11-20 15:40:33.615686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.671 qpair failed and we were unable to recover it.
00:30:44.671 [2024-11-20 15:40:33.625670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.671 [2024-11-20 15:40:33.625767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.671 [2024-11-20 15:40:33.625781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.671 [2024-11-20 15:40:33.625788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.671 [2024-11-20 15:40:33.625795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.671 [2024-11-20 15:40:33.625810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.671 qpair failed and we were unable to recover it.
00:30:44.934 [2024-11-20 15:40:33.635672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.934 [2024-11-20 15:40:33.635722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.934 [2024-11-20 15:40:33.635736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.934 [2024-11-20 15:40:33.635743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.934 [2024-11-20 15:40:33.635750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.934 [2024-11-20 15:40:33.635765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.934 qpair failed and we were unable to recover it.
00:30:44.934 [2024-11-20 15:40:33.645594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.934 [2024-11-20 15:40:33.645638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.934 [2024-11-20 15:40:33.645652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.934 [2024-11-20 15:40:33.645660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.934 [2024-11-20 15:40:33.645667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.934 [2024-11-20 15:40:33.645682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.934 qpair failed and we were unable to recover it.
00:30:44.934 [2024-11-20 15:40:33.655729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.934 [2024-11-20 15:40:33.655778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.934 [2024-11-20 15:40:33.655791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.934 [2024-11-20 15:40:33.655798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.934 [2024-11-20 15:40:33.655805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.934 [2024-11-20 15:40:33.655819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.934 qpair failed and we were unable to recover it.
00:30:44.934 [2024-11-20 15:40:33.665680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.935 [2024-11-20 15:40:33.665726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.935 [2024-11-20 15:40:33.665741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.935 [2024-11-20 15:40:33.665749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.935 [2024-11-20 15:40:33.665755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.935 [2024-11-20 15:40:33.665771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.935 qpair failed and we were unable to recover it.
00:30:44.935 [2024-11-20 15:40:33.675824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.935 [2024-11-20 15:40:33.675897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.935 [2024-11-20 15:40:33.675919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.935 [2024-11-20 15:40:33.675927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.935 [2024-11-20 15:40:33.675935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.935 [2024-11-20 15:40:33.675950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.935 qpair failed and we were unable to recover it.
00:30:44.935 [2024-11-20 15:40:33.685816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.935 [2024-11-20 15:40:33.685879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.935 [2024-11-20 15:40:33.685903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.935 [2024-11-20 15:40:33.685912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.935 [2024-11-20 15:40:33.685920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.935 [2024-11-20 15:40:33.685941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.935 qpair failed and we were unable to recover it.
00:30:44.935 [2024-11-20 15:40:33.695852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.935 [2024-11-20 15:40:33.695944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.935 [2024-11-20 15:40:33.695961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.935 [2024-11-20 15:40:33.695969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.935 [2024-11-20 15:40:33.695980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.935 [2024-11-20 15:40:33.695997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.935 qpair failed and we were unable to recover it.
00:30:44.935 [2024-11-20 15:40:33.705878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.935 [2024-11-20 15:40:33.705928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.935 [2024-11-20 15:40:33.705953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.935 [2024-11-20 15:40:33.705962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.935 [2024-11-20 15:40:33.705970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.935 [2024-11-20 15:40:33.705990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.935 qpair failed and we were unable to recover it.
00:30:44.935 [2024-11-20 15:40:33.715785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.935 [2024-11-20 15:40:33.715875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.935 [2024-11-20 15:40:33.715900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.935 [2024-11-20 15:40:33.715910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.935 [2024-11-20 15:40:33.715922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.935 [2024-11-20 15:40:33.715942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.935 qpair failed and we were unable to recover it.
00:30:44.935 [2024-11-20 15:40:33.725950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.935 [2024-11-20 15:40:33.725999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.935 [2024-11-20 15:40:33.726015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.935 [2024-11-20 15:40:33.726023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.935 [2024-11-20 15:40:33.726030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.935 [2024-11-20 15:40:33.726046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.935 qpair failed and we were unable to recover it.
00:30:44.935 [2024-11-20 15:40:33.735936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.935 [2024-11-20 15:40:33.735985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.935 [2024-11-20 15:40:33.735999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.935 [2024-11-20 15:40:33.736006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.935 [2024-11-20 15:40:33.736013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.935 [2024-11-20 15:40:33.736028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.935 qpair failed and we were unable to recover it.
00:30:44.935 [2024-11-20 15:40:33.745965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.935 [2024-11-20 15:40:33.746009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.935 [2024-11-20 15:40:33.746023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.935 [2024-11-20 15:40:33.746031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.935 [2024-11-20 15:40:33.746037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.935 [2024-11-20 15:40:33.746052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.935 qpair failed and we were unable to recover it.
00:30:44.935 [2024-11-20 15:40:33.756013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.935 [2024-11-20 15:40:33.756060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.935 [2024-11-20 15:40:33.756073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.935 [2024-11-20 15:40:33.756080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.935 [2024-11-20 15:40:33.756087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.935 [2024-11-20 15:40:33.756102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.935 qpair failed and we were unable to recover it.
00:30:44.935 [2024-11-20 15:40:33.766026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.935 [2024-11-20 15:40:33.766090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.935 [2024-11-20 15:40:33.766104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.935 [2024-11-20 15:40:33.766111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.935 [2024-11-20 15:40:33.766118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.935 [2024-11-20 15:40:33.766133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.935 qpair failed and we were unable to recover it.
00:30:44.935 [2024-11-20 15:40:33.776077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.935 [2024-11-20 15:40:33.776131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.935 [2024-11-20 15:40:33.776145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.935 [2024-11-20 15:40:33.776152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.935 [2024-11-20 15:40:33.776163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.935 [2024-11-20 15:40:33.776179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.935 qpair failed and we were unable to recover it.
00:30:44.935 [2024-11-20 15:40:33.786077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.935 [2024-11-20 15:40:33.786122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.935 [2024-11-20 15:40:33.786135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.935 [2024-11-20 15:40:33.786142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.935 [2024-11-20 15:40:33.786149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.935 [2024-11-20 15:40:33.786167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.935 qpair failed and we were unable to recover it.
00:30:44.935 [2024-11-20 15:40:33.796114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.936 [2024-11-20 15:40:33.796165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.936 [2024-11-20 15:40:33.796179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.936 [2024-11-20 15:40:33.796186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.936 [2024-11-20 15:40:33.796193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.936 [2024-11-20 15:40:33.796207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.936 qpair failed and we were unable to recover it.
00:30:44.936 [2024-11-20 15:40:33.806034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.936 [2024-11-20 15:40:33.806083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.936 [2024-11-20 15:40:33.806102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.936 [2024-11-20 15:40:33.806110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.936 [2024-11-20 15:40:33.806117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.936 [2024-11-20 15:40:33.806133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.936 qpair failed and we were unable to recover it.
00:30:44.936 [2024-11-20 15:40:33.816184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.936 [2024-11-20 15:40:33.816231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.936 [2024-11-20 15:40:33.816245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.936 [2024-11-20 15:40:33.816252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.936 [2024-11-20 15:40:33.816259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.936 [2024-11-20 15:40:33.816274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.936 qpair failed and we were unable to recover it.
00:30:44.936 [2024-11-20 15:40:33.826214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.936 [2024-11-20 15:40:33.826262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.936 [2024-11-20 15:40:33.826275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.936 [2024-11-20 15:40:33.826283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.936 [2024-11-20 15:40:33.826289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.936 [2024-11-20 15:40:33.826304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.936 qpair failed and we were unable to recover it.
00:30:44.936 [2024-11-20 15:40:33.836254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.936 [2024-11-20 15:40:33.836299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.936 [2024-11-20 15:40:33.836313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.936 [2024-11-20 15:40:33.836320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.936 [2024-11-20 15:40:33.836327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.936 [2024-11-20 15:40:33.836343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.936 qpair failed and we were unable to recover it.
00:30:44.936 [2024-11-20 15:40:33.846272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.936 [2024-11-20 15:40:33.846324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.936 [2024-11-20 15:40:33.846337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.936 [2024-11-20 15:40:33.846345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.936 [2024-11-20 15:40:33.846355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.936 [2024-11-20 15:40:33.846370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.936 qpair failed and we were unable to recover it.
00:30:44.936 [2024-11-20 15:40:33.856310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.936 [2024-11-20 15:40:33.856385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.936 [2024-11-20 15:40:33.856398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.936 [2024-11-20 15:40:33.856406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.936 [2024-11-20 15:40:33.856413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.936 [2024-11-20 15:40:33.856428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.936 qpair failed and we were unable to recover it.
00:30:44.936 [2024-11-20 15:40:33.866318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.936 [2024-11-20 15:40:33.866362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.936 [2024-11-20 15:40:33.866378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.936 [2024-11-20 15:40:33.866386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.936 [2024-11-20 15:40:33.866393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.936 [2024-11-20 15:40:33.866409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.936 qpair failed and we were unable to recover it.
00:30:44.936 [2024-11-20 15:40:33.876348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.936 [2024-11-20 15:40:33.876394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.936 [2024-11-20 15:40:33.876407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.936 [2024-11-20 15:40:33.876414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.936 [2024-11-20 15:40:33.876421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.936 [2024-11-20 15:40:33.876436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.936 qpair failed and we were unable to recover it.
00:30:44.936 [2024-11-20 15:40:33.886366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:44.936 [2024-11-20 15:40:33.886412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:44.936 [2024-11-20 15:40:33.886426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:44.936 [2024-11-20 15:40:33.886433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:44.936 [2024-11-20 15:40:33.886440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:44.936 [2024-11-20 15:40:33.886455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:44.936 qpair failed and we were unable to recover it.
00:30:45.201 [2024-11-20 15:40:33.896404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.201 [2024-11-20 15:40:33.896478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.201 [2024-11-20 15:40:33.896492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.201 [2024-11-20 15:40:33.896499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.201 [2024-11-20 15:40:33.896506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.201 [2024-11-20 15:40:33.896521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.201 qpair failed and we were unable to recover it.
00:30:45.201 [2024-11-20 15:40:33.906444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.201 [2024-11-20 15:40:33.906486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.201 [2024-11-20 15:40:33.906500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.201 [2024-11-20 15:40:33.906507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.201 [2024-11-20 15:40:33.906513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.201 [2024-11-20 15:40:33.906528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.201 qpair failed and we were unable to recover it.
00:30:45.201 [2024-11-20 15:40:33.916459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.201 [2024-11-20 15:40:33.916505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.201 [2024-11-20 15:40:33.916518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.201 [2024-11-20 15:40:33.916525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.201 [2024-11-20 15:40:33.916532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.201 [2024-11-20 15:40:33.916546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.202 qpair failed and we were unable to recover it.
00:30:45.202 [2024-11-20 15:40:33.926394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.202 [2024-11-20 15:40:33.926442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.202 [2024-11-20 15:40:33.926455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.202 [2024-11-20 15:40:33.926462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.202 [2024-11-20 15:40:33.926468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.202 [2024-11-20 15:40:33.926483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.202 qpair failed and we were unable to recover it.
00:30:45.202 [2024-11-20 15:40:33.936509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.202 [2024-11-20 15:40:33.936555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.202 [2024-11-20 15:40:33.936572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.202 [2024-11-20 15:40:33.936579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.202 [2024-11-20 15:40:33.936586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.202 [2024-11-20 15:40:33.936600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.202 qpair failed and we were unable to recover it.
00:30:45.202 [2024-11-20 15:40:33.946539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.202 [2024-11-20 15:40:33.946581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.202 [2024-11-20 15:40:33.946595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.202 [2024-11-20 15:40:33.946602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.202 [2024-11-20 15:40:33.946609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.202 [2024-11-20 15:40:33.946623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.202 qpair failed and we were unable to recover it.
00:30:45.202 [2024-11-20 15:40:33.956560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.202 [2024-11-20 15:40:33.956608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.202 [2024-11-20 15:40:33.956620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.202 [2024-11-20 15:40:33.956628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.202 [2024-11-20 15:40:33.956635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.202 [2024-11-20 15:40:33.956649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.202 qpair failed and we were unable to recover it.
00:30:45.202 [2024-11-20 15:40:33.966599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.202 [2024-11-20 15:40:33.966693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.202 [2024-11-20 15:40:33.966707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.202 [2024-11-20 15:40:33.966714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.202 [2024-11-20 15:40:33.966721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.202 [2024-11-20 15:40:33.966737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.202 qpair failed and we were unable to recover it.
00:30:45.202 [2024-11-20 15:40:33.976614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.202 [2024-11-20 15:40:33.976658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.202 [2024-11-20 15:40:33.976671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.202 [2024-11-20 15:40:33.976682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.202 [2024-11-20 15:40:33.976689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.202 [2024-11-20 15:40:33.976703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.202 qpair failed and we were unable to recover it.
00:30:45.202 [2024-11-20 15:40:33.986692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.202 [2024-11-20 15:40:33.986768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.202 [2024-11-20 15:40:33.986780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.202 [2024-11-20 15:40:33.986788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.202 [2024-11-20 15:40:33.986794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.202 [2024-11-20 15:40:33.986810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.202 qpair failed and we were unable to recover it.
00:30:45.202 [2024-11-20 15:40:33.996677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.202 [2024-11-20 15:40:33.996723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.202 [2024-11-20 15:40:33.996736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.202 [2024-11-20 15:40:33.996743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.202 [2024-11-20 15:40:33.996750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.202 [2024-11-20 15:40:33.996765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.202 qpair failed and we were unable to recover it.
00:30:45.202 [2024-11-20 15:40:34.006720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.202 [2024-11-20 15:40:34.006776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.202 [2024-11-20 15:40:34.006789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.202 [2024-11-20 15:40:34.006797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.202 [2024-11-20 15:40:34.006803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.202 [2024-11-20 15:40:34.006818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.202 qpair failed and we were unable to recover it.
00:30:45.202 [2024-11-20 15:40:34.016738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.202 [2024-11-20 15:40:34.016827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.202 [2024-11-20 15:40:34.016841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.202 [2024-11-20 15:40:34.016849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.202 [2024-11-20 15:40:34.016855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.202 [2024-11-20 15:40:34.016870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.202 qpair failed and we were unable to recover it.
00:30:45.202 [2024-11-20 15:40:34.026751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.202 [2024-11-20 15:40:34.026851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.202 [2024-11-20 15:40:34.026866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.202 [2024-11-20 15:40:34.026873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.202 [2024-11-20 15:40:34.026880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.202 [2024-11-20 15:40:34.026898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.202 qpair failed and we were unable to recover it.
00:30:45.202 [2024-11-20 15:40:34.036735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.202 [2024-11-20 15:40:34.036818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.202 [2024-11-20 15:40:34.036831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.202 [2024-11-20 15:40:34.036839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.202 [2024-11-20 15:40:34.036847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.202 [2024-11-20 15:40:34.036862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.202 qpair failed and we were unable to recover it.
00:30:45.202 [2024-11-20 15:40:34.046828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.202 [2024-11-20 15:40:34.046874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.202 [2024-11-20 15:40:34.046888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.202 [2024-11-20 15:40:34.046895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.202 [2024-11-20 15:40:34.046902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.203 [2024-11-20 15:40:34.046916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.203 qpair failed and we were unable to recover it.
00:30:45.203 [2024-11-20 15:40:34.056834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.203 [2024-11-20 15:40:34.056885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.203 [2024-11-20 15:40:34.056910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.203 [2024-11-20 15:40:34.056919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.203 [2024-11-20 15:40:34.056927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.203 [2024-11-20 15:40:34.056948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.203 qpair failed and we were unable to recover it.
00:30:45.203 [2024-11-20 15:40:34.066875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.203 [2024-11-20 15:40:34.066946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.203 [2024-11-20 15:40:34.066962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.203 [2024-11-20 15:40:34.066969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.203 [2024-11-20 15:40:34.066976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.203 [2024-11-20 15:40:34.066993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.203 qpair failed and we were unable to recover it.
00:30:45.203 [2024-11-20 15:40:34.076893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.203 [2024-11-20 15:40:34.076943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.203 [2024-11-20 15:40:34.076968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.203 [2024-11-20 15:40:34.076978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.203 [2024-11-20 15:40:34.076985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.203 [2024-11-20 15:40:34.077006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.203 qpair failed and we were unable to recover it.
00:30:45.203 [2024-11-20 15:40:34.086920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.203 [2024-11-20 15:40:34.086965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.203 [2024-11-20 15:40:34.086981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.203 [2024-11-20 15:40:34.086988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.203 [2024-11-20 15:40:34.086995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.203 [2024-11-20 15:40:34.087011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.203 qpair failed and we were unable to recover it.
00:30:45.203 [2024-11-20 15:40:34.096957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.203 [2024-11-20 15:40:34.096999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.203 [2024-11-20 15:40:34.097013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.203 [2024-11-20 15:40:34.097020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.203 [2024-11-20 15:40:34.097027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.203 [2024-11-20 15:40:34.097043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.203 qpair failed and we were unable to recover it.
00:30:45.203 [2024-11-20 15:40:34.106969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.203 [2024-11-20 15:40:34.107010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.203 [2024-11-20 15:40:34.107024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.203 [2024-11-20 15:40:34.107036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.203 [2024-11-20 15:40:34.107043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.203 [2024-11-20 15:40:34.107058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.203 qpair failed and we were unable to recover it.
00:30:45.203 [2024-11-20 15:40:34.117021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.203 [2024-11-20 15:40:34.117070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.203 [2024-11-20 15:40:34.117083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.203 [2024-11-20 15:40:34.117090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.203 [2024-11-20 15:40:34.117096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.203 [2024-11-20 15:40:34.117111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.203 qpair failed and we were unable to recover it.
00:30:45.203 [2024-11-20 15:40:34.127038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.203 [2024-11-20 15:40:34.127084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.203 [2024-11-20 15:40:34.127097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.203 [2024-11-20 15:40:34.127105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.203 [2024-11-20 15:40:34.127111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.203 [2024-11-20 15:40:34.127126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.203 qpair failed and we were unable to recover it.
00:30:45.203 [2024-11-20 15:40:34.137073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.203 [2024-11-20 15:40:34.137122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.203 [2024-11-20 15:40:34.137136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.203 [2024-11-20 15:40:34.137143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.203 [2024-11-20 15:40:34.137149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.203 [2024-11-20 15:40:34.137177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.203 qpair failed and we were unable to recover it.
00:30:45.203 [2024-11-20 15:40:34.147033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.203 [2024-11-20 15:40:34.147074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.203 [2024-11-20 15:40:34.147088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.203 [2024-11-20 15:40:34.147095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.203 [2024-11-20 15:40:34.147102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.203 [2024-11-20 15:40:34.147120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.203 qpair failed and we were unable to recover it.
00:30:45.203 [2024-11-20 15:40:34.157112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.203 [2024-11-20 15:40:34.157162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.203 [2024-11-20 15:40:34.157176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.203 [2024-11-20 15:40:34.157183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.203 [2024-11-20 15:40:34.157189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.203 [2024-11-20 15:40:34.157204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.203 qpair failed and we were unable to recover it.
00:30:45.465 [2024-11-20 15:40:34.167107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.465 [2024-11-20 15:40:34.167155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.465 [2024-11-20 15:40:34.167173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.465 [2024-11-20 15:40:34.167181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.465 [2024-11-20 15:40:34.167187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.465 [2024-11-20 15:40:34.167203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.465 qpair failed and we were unable to recover it.
00:30:45.465 [2024-11-20 15:40:34.177171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.465 [2024-11-20 15:40:34.177213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.465 [2024-11-20 15:40:34.177235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.465 [2024-11-20 15:40:34.177242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.466 [2024-11-20 15:40:34.177249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.466 [2024-11-20 15:40:34.177264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.466 qpair failed and we were unable to recover it.
00:30:45.466 [2024-11-20 15:40:34.187153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.466 [2024-11-20 15:40:34.187201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.466 [2024-11-20 15:40:34.187215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.466 [2024-11-20 15:40:34.187222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.466 [2024-11-20 15:40:34.187228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.466 [2024-11-20 15:40:34.187243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.466 qpair failed and we were unable to recover it.
00:30:45.466 [2024-11-20 15:40:34.197192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.466 [2024-11-20 15:40:34.197239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.466 [2024-11-20 15:40:34.197252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.466 [2024-11-20 15:40:34.197259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.466 [2024-11-20 15:40:34.197266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.466 [2024-11-20 15:40:34.197281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.466 qpair failed and we were unable to recover it.
00:30:45.466 [2024-11-20 15:40:34.207258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.466 [2024-11-20 15:40:34.207308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.466 [2024-11-20 15:40:34.207321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.466 [2024-11-20 15:40:34.207328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.466 [2024-11-20 15:40:34.207335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.466 [2024-11-20 15:40:34.207350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.466 qpair failed and we were unable to recover it.
00:30:45.466 [2024-11-20 15:40:34.217253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.466 [2024-11-20 15:40:34.217299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.466 [2024-11-20 15:40:34.217312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.466 [2024-11-20 15:40:34.217320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.466 [2024-11-20 15:40:34.217326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.466 [2024-11-20 15:40:34.217341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.466 qpair failed and we were unable to recover it.
00:30:45.466 [2024-11-20 15:40:34.227266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.466 [2024-11-20 15:40:34.227308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.466 [2024-11-20 15:40:34.227321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.466 [2024-11-20 15:40:34.227328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.466 [2024-11-20 15:40:34.227335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.466 [2024-11-20 15:40:34.227349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.466 qpair failed and we were unable to recover it.
00:30:45.466 [2024-11-20 15:40:34.237334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.466 [2024-11-20 15:40:34.237406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.466 [2024-11-20 15:40:34.237423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.466 [2024-11-20 15:40:34.237430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.466 [2024-11-20 15:40:34.237438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.466 [2024-11-20 15:40:34.237453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.466 qpair failed and we were unable to recover it.
00:30:45.466 [2024-11-20 15:40:34.247374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.466 [2024-11-20 15:40:34.247449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.466 [2024-11-20 15:40:34.247464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.466 [2024-11-20 15:40:34.247472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.466 [2024-11-20 15:40:34.247479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.466 [2024-11-20 15:40:34.247498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.466 qpair failed and we were unable to recover it.
00:30:45.466 [2024-11-20 15:40:34.257350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.466 [2024-11-20 15:40:34.257398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.466 [2024-11-20 15:40:34.257413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.466 [2024-11-20 15:40:34.257420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.466 [2024-11-20 15:40:34.257427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.466 [2024-11-20 15:40:34.257442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.466 qpair failed and we were unable to recover it.
00:30:45.466 [2024-11-20 15:40:34.267455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.466 [2024-11-20 15:40:34.267513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.466 [2024-11-20 15:40:34.267526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.466 [2024-11-20 15:40:34.267534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.466 [2024-11-20 15:40:34.267540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.466 [2024-11-20 15:40:34.267555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.466 qpair failed and we were unable to recover it.
00:30:45.466 [2024-11-20 15:40:34.277447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.466 [2024-11-20 15:40:34.277496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.466 [2024-11-20 15:40:34.277509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.466 [2024-11-20 15:40:34.277516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.466 [2024-11-20 15:40:34.277526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.466 [2024-11-20 15:40:34.277541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.466 qpair failed and we were unable to recover it.
00:30:45.466 [2024-11-20 15:40:34.287518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.466 [2024-11-20 15:40:34.287574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.466 [2024-11-20 15:40:34.287587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.466 [2024-11-20 15:40:34.287594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.466 [2024-11-20 15:40:34.287601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.466 [2024-11-20 15:40:34.287615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.466 qpair failed and we were unable to recover it.
00:30:45.466 [2024-11-20 15:40:34.297501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.466 [2024-11-20 15:40:34.297546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.466 [2024-11-20 15:40:34.297559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.466 [2024-11-20 15:40:34.297566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.466 [2024-11-20 15:40:34.297572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.466 [2024-11-20 15:40:34.297587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.466 qpair failed and we were unable to recover it.
00:30:45.466 [2024-11-20 15:40:34.307517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.466 [2024-11-20 15:40:34.307599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.466 [2024-11-20 15:40:34.307612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.466 [2024-11-20 15:40:34.307620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.466 [2024-11-20 15:40:34.307627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.466 [2024-11-20 15:40:34.307642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.466 qpair failed and we were unable to recover it.
00:30:45.466 [2024-11-20 15:40:34.317545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.466 [2024-11-20 15:40:34.317594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.466 [2024-11-20 15:40:34.317607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.466 [2024-11-20 15:40:34.317614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.466 [2024-11-20 15:40:34.317621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.466 [2024-11-20 15:40:34.317635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.466 qpair failed and we were unable to recover it.
00:30:45.466 [2024-11-20 15:40:34.327561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.466 [2024-11-20 15:40:34.327612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.466 [2024-11-20 15:40:34.327625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.466 [2024-11-20 15:40:34.327632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.466 [2024-11-20 15:40:34.327639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.466 [2024-11-20 15:40:34.327653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.466 qpair failed and we were unable to recover it.
00:30:45.466 [2024-11-20 15:40:34.337596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.466 [2024-11-20 15:40:34.337642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.466 [2024-11-20 15:40:34.337655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.466 [2024-11-20 15:40:34.337663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.466 [2024-11-20 15:40:34.337669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.466 [2024-11-20 15:40:34.337683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.466 qpair failed and we were unable to recover it.
00:30:45.466 [2024-11-20 15:40:34.347604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:45.466 [2024-11-20 15:40:34.347649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:45.466 [2024-11-20 15:40:34.347662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:45.466 [2024-11-20 15:40:34.347669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:45.466 [2024-11-20 15:40:34.347676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90
00:30:45.466 [2024-11-20 15:40:34.347690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.466 qpair failed and we were unable to recover it.
00:30:45.466 [2024-11-20 15:40:34.357664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.466 [2024-11-20 15:40:34.357709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.466 [2024-11-20 15:40:34.357722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.466 [2024-11-20 15:40:34.357730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.466 [2024-11-20 15:40:34.357736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.466 [2024-11-20 15:40:34.357751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.466 qpair failed and we were unable to recover it. 00:30:45.466 [2024-11-20 15:40:34.367690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.466 [2024-11-20 15:40:34.367735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.466 [2024-11-20 15:40:34.367755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.466 [2024-11-20 15:40:34.367762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.466 [2024-11-20 15:40:34.367768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.466 [2024-11-20 15:40:34.367783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.466 qpair failed and we were unable to recover it. 00:30:45.466 [2024-11-20 15:40:34.377717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.466 [2024-11-20 15:40:34.377767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.466 [2024-11-20 15:40:34.377780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.466 [2024-11-20 15:40:34.377787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.466 [2024-11-20 15:40:34.377794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.466 [2024-11-20 15:40:34.377808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.466 qpair failed and we were unable to recover it. 
00:30:45.466 [2024-11-20 15:40:34.387780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.466 [2024-11-20 15:40:34.387866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.466 [2024-11-20 15:40:34.387880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.466 [2024-11-20 15:40:34.387887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.466 [2024-11-20 15:40:34.387894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.466 [2024-11-20 15:40:34.387908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.466 qpair failed and we were unable to recover it. 00:30:45.466 [2024-11-20 15:40:34.397759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.466 [2024-11-20 15:40:34.397803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.466 [2024-11-20 15:40:34.397817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.466 [2024-11-20 15:40:34.397824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.466 [2024-11-20 15:40:34.397831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.466 [2024-11-20 15:40:34.397846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.466 qpair failed and we were unable to recover it. 00:30:45.466 [2024-11-20 15:40:34.407831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.466 [2024-11-20 15:40:34.407879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.466 [2024-11-20 15:40:34.407893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.466 [2024-11-20 15:40:34.407900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.466 [2024-11-20 15:40:34.407910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.466 [2024-11-20 15:40:34.407924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.466 qpair failed and we were unable to recover it. 
00:30:45.466 [2024-11-20 15:40:34.417821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.466 [2024-11-20 15:40:34.417891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.466 [2024-11-20 15:40:34.417904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.466 [2024-11-20 15:40:34.417912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.466 [2024-11-20 15:40:34.417918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.466 [2024-11-20 15:40:34.417934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.466 qpair failed and we were unable to recover it. 00:30:45.729 [2024-11-20 15:40:34.427845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.729 [2024-11-20 15:40:34.427886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.729 [2024-11-20 15:40:34.427899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.729 [2024-11-20 15:40:34.427907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.729 [2024-11-20 15:40:34.427914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.729 [2024-11-20 15:40:34.427928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.729 qpair failed and we were unable to recover it. 00:30:45.729 [2024-11-20 15:40:34.437847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.729 [2024-11-20 15:40:34.437907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.729 [2024-11-20 15:40:34.437931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.729 [2024-11-20 15:40:34.437940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.729 [2024-11-20 15:40:34.437947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.729 [2024-11-20 15:40:34.437967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.729 qpair failed and we were unable to recover it. 
00:30:45.729 [2024-11-20 15:40:34.447907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.729 [2024-11-20 15:40:34.447953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.729 [2024-11-20 15:40:34.447967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.729 [2024-11-20 15:40:34.447975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.729 [2024-11-20 15:40:34.447982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.729 [2024-11-20 15:40:34.447997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.729 qpair failed and we were unable to recover it. 00:30:45.729 [2024-11-20 15:40:34.457889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.729 [2024-11-20 15:40:34.457945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.729 [2024-11-20 15:40:34.457970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.729 [2024-11-20 15:40:34.457979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.729 [2024-11-20 15:40:34.457986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.729 [2024-11-20 15:40:34.458006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.729 qpair failed and we were unable to recover it. 00:30:45.729 [2024-11-20 15:40:34.467962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.729 [2024-11-20 15:40:34.468007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.729 [2024-11-20 15:40:34.468022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.729 [2024-11-20 15:40:34.468030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.729 [2024-11-20 15:40:34.468036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.729 [2024-11-20 15:40:34.468053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.729 qpair failed and we were unable to recover it. 
00:30:45.729 [2024-11-20 15:40:34.477989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.729 [2024-11-20 15:40:34.478039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.729 [2024-11-20 15:40:34.478053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.729 [2024-11-20 15:40:34.478060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.729 [2024-11-20 15:40:34.478067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.729 [2024-11-20 15:40:34.478082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.729 qpair failed and we were unable to recover it. 00:30:45.729 [2024-11-20 15:40:34.488010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.729 [2024-11-20 15:40:34.488059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.729 [2024-11-20 15:40:34.488072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.729 [2024-11-20 15:40:34.488080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.729 [2024-11-20 15:40:34.488087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.729 [2024-11-20 15:40:34.488101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.729 qpair failed and we were unable to recover it. 00:30:45.729 [2024-11-20 15:40:34.498024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.729 [2024-11-20 15:40:34.498075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.729 [2024-11-20 15:40:34.498093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.729 [2024-11-20 15:40:34.498100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.729 [2024-11-20 15:40:34.498107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.729 [2024-11-20 15:40:34.498122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.729 qpair failed and we were unable to recover it. 
00:30:45.729 [2024-11-20 15:40:34.508042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.729 [2024-11-20 15:40:34.508089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.729 [2024-11-20 15:40:34.508103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.729 [2024-11-20 15:40:34.508110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.729 [2024-11-20 15:40:34.508117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.729 [2024-11-20 15:40:34.508131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.729 qpair failed and we were unable to recover it. 00:30:45.729 [2024-11-20 15:40:34.517955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.729 [2024-11-20 15:40:34.518003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.729 [2024-11-20 15:40:34.518018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.729 [2024-11-20 15:40:34.518025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.729 [2024-11-20 15:40:34.518032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.729 [2024-11-20 15:40:34.518048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.729 qpair failed and we were unable to recover it. 00:30:45.729 [2024-11-20 15:40:34.528111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.729 [2024-11-20 15:40:34.528161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.729 [2024-11-20 15:40:34.528176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.729 [2024-11-20 15:40:34.528183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.729 [2024-11-20 15:40:34.528189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.729 [2024-11-20 15:40:34.528204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.729 qpair failed and we were unable to recover it. 
00:30:45.729 [2024-11-20 15:40:34.538189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.729 [2024-11-20 15:40:34.538233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.729 [2024-11-20 15:40:34.538247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.729 [2024-11-20 15:40:34.538258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.729 [2024-11-20 15:40:34.538265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.729 [2024-11-20 15:40:34.538279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.729 qpair failed and we were unable to recover it. 00:30:45.729 [2024-11-20 15:40:34.548127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.729 [2024-11-20 15:40:34.548178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.729 [2024-11-20 15:40:34.548192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.729 [2024-11-20 15:40:34.548199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.729 [2024-11-20 15:40:34.548206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.730 [2024-11-20 15:40:34.548221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.730 qpair failed and we were unable to recover it. 00:30:45.730 [2024-11-20 15:40:34.558172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.730 [2024-11-20 15:40:34.558220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.730 [2024-11-20 15:40:34.558233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.730 [2024-11-20 15:40:34.558240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.730 [2024-11-20 15:40:34.558247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.730 [2024-11-20 15:40:34.558261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.730 qpair failed and we were unable to recover it. 
00:30:45.730 [2024-11-20 15:40:34.568254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.730 [2024-11-20 15:40:34.568304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.730 [2024-11-20 15:40:34.568317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.730 [2024-11-20 15:40:34.568325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.730 [2024-11-20 15:40:34.568331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.730 [2024-11-20 15:40:34.568346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.730 qpair failed and we were unable to recover it. 00:30:45.730 [2024-11-20 15:40:34.578279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.730 [2024-11-20 15:40:34.578324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.730 [2024-11-20 15:40:34.578337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.730 [2024-11-20 15:40:34.578345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.730 [2024-11-20 15:40:34.578351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.730 [2024-11-20 15:40:34.578366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.730 qpair failed and we were unable to recover it. 00:30:45.730 [2024-11-20 15:40:34.588303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.730 [2024-11-20 15:40:34.588379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.730 [2024-11-20 15:40:34.588392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.730 [2024-11-20 15:40:34.588399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.730 [2024-11-20 15:40:34.588405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.730 [2024-11-20 15:40:34.588420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.730 qpair failed and we were unable to recover it. 
00:30:45.730 [2024-11-20 15:40:34.598321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.730 [2024-11-20 15:40:34.598366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.730 [2024-11-20 15:40:34.598379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.730 [2024-11-20 15:40:34.598386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.730 [2024-11-20 15:40:34.598392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.730 [2024-11-20 15:40:34.598407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.730 qpair failed and we were unable to recover it. 00:30:45.730 [2024-11-20 15:40:34.608343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.730 [2024-11-20 15:40:34.608390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.730 [2024-11-20 15:40:34.608403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.730 [2024-11-20 15:40:34.608410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.730 [2024-11-20 15:40:34.608417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.730 [2024-11-20 15:40:34.608431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.730 qpair failed and we were unable to recover it. 00:30:45.730 [2024-11-20 15:40:34.618323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.730 [2024-11-20 15:40:34.618368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.730 [2024-11-20 15:40:34.618381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.730 [2024-11-20 15:40:34.618388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.730 [2024-11-20 15:40:34.618394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.730 [2024-11-20 15:40:34.618408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.730 qpair failed and we were unable to recover it. 
00:30:45.730 [2024-11-20 15:40:34.628375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.730 [2024-11-20 15:40:34.628424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.730 [2024-11-20 15:40:34.628437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.730 [2024-11-20 15:40:34.628444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.730 [2024-11-20 15:40:34.628451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.730 [2024-11-20 15:40:34.628465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.730 qpair failed and we were unable to recover it. 00:30:45.730 [2024-11-20 15:40:34.638397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.730 [2024-11-20 15:40:34.638443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.730 [2024-11-20 15:40:34.638456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.730 [2024-11-20 15:40:34.638463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.730 [2024-11-20 15:40:34.638470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.730 [2024-11-20 15:40:34.638484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.730 qpair failed and we were unable to recover it. 00:30:45.730 [2024-11-20 15:40:34.648433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.730 [2024-11-20 15:40:34.648480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.730 [2024-11-20 15:40:34.648494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.730 [2024-11-20 15:40:34.648501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.730 [2024-11-20 15:40:34.648508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.730 [2024-11-20 15:40:34.648523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.730 qpair failed and we were unable to recover it. 
00:30:45.730 [2024-11-20 15:40:34.658483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.730 [2024-11-20 15:40:34.658528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.730 [2024-11-20 15:40:34.658541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.730 [2024-11-20 15:40:34.658549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.730 [2024-11-20 15:40:34.658555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.730 [2024-11-20 15:40:34.658570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.730 qpair failed and we were unable to recover it. 00:30:45.730 [2024-11-20 15:40:34.668558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.730 [2024-11-20 15:40:34.668606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.730 [2024-11-20 15:40:34.668619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.730 [2024-11-20 15:40:34.668630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.730 [2024-11-20 15:40:34.668637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.730 [2024-11-20 15:40:34.668652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.730 qpair failed and we were unable to recover it. 00:30:45.730 [2024-11-20 15:40:34.678517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.730 [2024-11-20 15:40:34.678560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.730 [2024-11-20 15:40:34.678573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.730 [2024-11-20 15:40:34.678581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.730 [2024-11-20 15:40:34.678588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.731 [2024-11-20 15:40:34.678603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.731 qpair failed and we were unable to recover it. 
00:30:45.993 [2024-11-20 15:40:34.688574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.993 [2024-11-20 15:40:34.688647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.993 [2024-11-20 15:40:34.688660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.993 [2024-11-20 15:40:34.688668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.993 [2024-11-20 15:40:34.688676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.993 [2024-11-20 15:40:34.688691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.993 qpair failed and we were unable to recover it. 00:30:45.993 [2024-11-20 15:40:34.698587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.993 [2024-11-20 15:40:34.698633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.993 [2024-11-20 15:40:34.698646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.993 [2024-11-20 15:40:34.698654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.993 [2024-11-20 15:40:34.698660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.993 [2024-11-20 15:40:34.698674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.993 qpair failed and we were unable to recover it. 00:30:45.993 [2024-11-20 15:40:34.708674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.993 [2024-11-20 15:40:34.708719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.993 [2024-11-20 15:40:34.708732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.993 [2024-11-20 15:40:34.708740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.993 [2024-11-20 15:40:34.708746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.993 [2024-11-20 15:40:34.708764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.993 qpair failed and we were unable to recover it. 
00:30:45.993 [2024-11-20 15:40:34.718511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.993 [2024-11-20 15:40:34.718555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.993 [2024-11-20 15:40:34.718569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.993 [2024-11-20 15:40:34.718576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.993 [2024-11-20 15:40:34.718583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.993 [2024-11-20 15:40:34.718598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.993 qpair failed and we were unable to recover it. 00:30:45.993 [2024-11-20 15:40:34.728719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.993 [2024-11-20 15:40:34.728764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.993 [2024-11-20 15:40:34.728777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.993 [2024-11-20 15:40:34.728785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.993 [2024-11-20 15:40:34.728792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.993 [2024-11-20 15:40:34.728806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.993 qpair failed and we were unable to recover it. 00:30:45.993 [2024-11-20 15:40:34.738700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.993 [2024-11-20 15:40:34.738746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.993 [2024-11-20 15:40:34.738759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.993 [2024-11-20 15:40:34.738766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.993 [2024-11-20 15:40:34.738772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.993 [2024-11-20 15:40:34.738786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.993 qpair failed and we were unable to recover it. 
00:30:45.993 [2024-11-20 15:40:34.748692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.993 [2024-11-20 15:40:34.748734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.993 [2024-11-20 15:40:34.748747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.993 [2024-11-20 15:40:34.748754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.993 [2024-11-20 15:40:34.748761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.993 [2024-11-20 15:40:34.748775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.993 qpair failed and we were unable to recover it. 00:30:45.993 [2024-11-20 15:40:34.758738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.993 [2024-11-20 15:40:34.758783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.993 [2024-11-20 15:40:34.758797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.993 [2024-11-20 15:40:34.758804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.993 [2024-11-20 15:40:34.758810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.993 [2024-11-20 15:40:34.758825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.993 qpair failed and we were unable to recover it. 00:30:45.993 [2024-11-20 15:40:34.768761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.993 [2024-11-20 15:40:34.768806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.993 [2024-11-20 15:40:34.768819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.993 [2024-11-20 15:40:34.768826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.993 [2024-11-20 15:40:34.768832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.993 [2024-11-20 15:40:34.768847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.993 qpair failed and we were unable to recover it. 
00:30:45.993 [2024-11-20 15:40:34.778822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.993 [2024-11-20 15:40:34.778869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.993 [2024-11-20 15:40:34.778882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.993 [2024-11-20 15:40:34.778889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.993 [2024-11-20 15:40:34.778896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.993 [2024-11-20 15:40:34.778910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.993 qpair failed and we were unable to recover it. 00:30:45.993 [2024-11-20 15:40:34.788848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.993 [2024-11-20 15:40:34.788904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.993 [2024-11-20 15:40:34.788928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.993 [2024-11-20 15:40:34.788937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.993 [2024-11-20 15:40:34.788945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.993 [2024-11-20 15:40:34.788965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.993 qpair failed and we were unable to recover it. 00:30:45.993 [2024-11-20 15:40:34.798862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.993 [2024-11-20 15:40:34.798915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.993 [2024-11-20 15:40:34.798944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.993 [2024-11-20 15:40:34.798953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.993 [2024-11-20 15:40:34.798960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.993 [2024-11-20 15:40:34.798981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.993 qpair failed and we were unable to recover it. 
00:30:45.994 [2024-11-20 15:40:34.808888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.994 [2024-11-20 15:40:34.808945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.994 [2024-11-20 15:40:34.808970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.994 [2024-11-20 15:40:34.808979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.994 [2024-11-20 15:40:34.808986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.994 [2024-11-20 15:40:34.809006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.994 qpair failed and we were unable to recover it. 00:30:45.994 [2024-11-20 15:40:34.818913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.994 [2024-11-20 15:40:34.818964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.994 [2024-11-20 15:40:34.818979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.994 [2024-11-20 15:40:34.818987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.994 [2024-11-20 15:40:34.818993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.994 [2024-11-20 15:40:34.819009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.994 qpair failed and we were unable to recover it. 00:30:45.994 [2024-11-20 15:40:34.828914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.994 [2024-11-20 15:40:34.828964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.994 [2024-11-20 15:40:34.828978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.994 [2024-11-20 15:40:34.828985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.994 [2024-11-20 15:40:34.828992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.994 [2024-11-20 15:40:34.829007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.994 qpair failed and we were unable to recover it. 
00:30:45.994 [2024-11-20 15:40:34.838957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.994 [2024-11-20 15:40:34.839004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.994 [2024-11-20 15:40:34.839018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.994 [2024-11-20 15:40:34.839025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.994 [2024-11-20 15:40:34.839036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.994 [2024-11-20 15:40:34.839052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.994 qpair failed and we were unable to recover it. 00:30:45.994 [2024-11-20 15:40:34.848989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.994 [2024-11-20 15:40:34.849037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.994 [2024-11-20 15:40:34.849051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.994 [2024-11-20 15:40:34.849058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.994 [2024-11-20 15:40:34.849064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.994 [2024-11-20 15:40:34.849079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.994 qpair failed and we were unable to recover it. 00:30:45.994 [2024-11-20 15:40:34.859003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.994 [2024-11-20 15:40:34.859058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.994 [2024-11-20 15:40:34.859071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.994 [2024-11-20 15:40:34.859078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.994 [2024-11-20 15:40:34.859085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.994 [2024-11-20 15:40:34.859100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.994 qpair failed and we were unable to recover it. 
00:30:45.994 [2024-11-20 15:40:34.869013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.994 [2024-11-20 15:40:34.869060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.994 [2024-11-20 15:40:34.869074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.994 [2024-11-20 15:40:34.869081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.994 [2024-11-20 15:40:34.869088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.994 [2024-11-20 15:40:34.869103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.994 qpair failed and we were unable to recover it. 00:30:45.994 [2024-11-20 15:40:34.879045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.994 [2024-11-20 15:40:34.879090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.994 [2024-11-20 15:40:34.879103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.994 [2024-11-20 15:40:34.879111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.994 [2024-11-20 15:40:34.879117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.994 [2024-11-20 15:40:34.879132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.994 qpair failed and we were unable to recover it. 00:30:45.994 [2024-11-20 15:40:34.889119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.994 [2024-11-20 15:40:34.889170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.994 [2024-11-20 15:40:34.889183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.994 [2024-11-20 15:40:34.889190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.994 [2024-11-20 15:40:34.889197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.994 [2024-11-20 15:40:34.889211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.994 qpair failed and we were unable to recover it. 
00:30:45.994 [2024-11-20 15:40:34.899126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.994 [2024-11-20 15:40:34.899171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.994 [2024-11-20 15:40:34.899184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.994 [2024-11-20 15:40:34.899192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.994 [2024-11-20 15:40:34.899198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.994 [2024-11-20 15:40:34.899213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.994 qpair failed and we were unable to recover it. 00:30:45.994 [2024-11-20 15:40:34.909156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.994 [2024-11-20 15:40:34.909205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.994 [2024-11-20 15:40:34.909218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.994 [2024-11-20 15:40:34.909225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.994 [2024-11-20 15:40:34.909232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.994 [2024-11-20 15:40:34.909247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.994 qpair failed and we were unable to recover it. 00:30:45.994 [2024-11-20 15:40:34.919180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.994 [2024-11-20 15:40:34.919251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.994 [2024-11-20 15:40:34.919264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.994 [2024-11-20 15:40:34.919272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.994 [2024-11-20 15:40:34.919279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.994 [2024-11-20 15:40:34.919294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.994 qpair failed and we were unable to recover it. 
00:30:45.994 [2024-11-20 15:40:34.929210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.994 [2024-11-20 15:40:34.929255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.994 [2024-11-20 15:40:34.929273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.994 [2024-11-20 15:40:34.929280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.994 [2024-11-20 15:40:34.929287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.995 [2024-11-20 15:40:34.929302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.995 qpair failed and we were unable to recover it. 00:30:45.995 [2024-11-20 15:40:34.939194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.995 [2024-11-20 15:40:34.939238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.995 [2024-11-20 15:40:34.939252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.995 [2024-11-20 15:40:34.939259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.995 [2024-11-20 15:40:34.939266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.995 [2024-11-20 15:40:34.939281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.995 qpair failed and we were unable to recover it. 00:30:45.995 [2024-11-20 15:40:34.949251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.995 [2024-11-20 15:40:34.949297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.995 [2024-11-20 15:40:34.949311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.995 [2024-11-20 15:40:34.949318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.995 [2024-11-20 15:40:34.949324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:45.995 [2024-11-20 15:40:34.949338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.995 qpair failed and we were unable to recover it. 
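The repeating records above show one failed I/O queue pair CONNECT attempt roughly every 10 ms: the target rejects the pair because controller ID 0x1 is unknown, and the initiator reports sct 1 (command-specific status type) with sc 130 (0x82), which for a Fabrics CONNECT command is Connect Invalid Parameters. One way to gauge the retry cadence from a saved copy of this console output (the file name below is only an assumption, not something the harness produces):

    # count failed fabric CONNECT polls in a captured copy of this log
    # (console.log is a hypothetical name for the saved build output)
    grep -c 'nvme_fabric_qpair_connect_poll.*Connect command failed' console.log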
00:30:46.256 [2024-11-20 15:40:34.959259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.256 [2024-11-20 15:40:34.959306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.256 [2024-11-20 15:40:34.959319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.256 [2024-11-20 15:40:34.959326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.256 [2024-11-20 15:40:34.959333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8e88000b90 00:30:46.256 [2024-11-20 15:40:34.959347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:46.256 qpair failed and we were unable to recover it. 00:30:46.256 Read completed with error (sct=0, sc=8) 00:30:46.256 starting I/O failed 00:30:46.256 Read completed with error (sct=0, sc=8) 00:30:46.256 starting I/O failed 00:30:46.256 Read completed with error (sct=0, sc=8) 00:30:46.256 starting I/O failed 00:30:46.256 Read completed with error (sct=0, sc=8) 00:30:46.256 starting I/O failed 00:30:46.256 Read completed with error (sct=0, sc=8) 00:30:46.256 starting I/O failed 00:30:46.256 Read completed with error (sct=0, sc=8) 00:30:46.256 starting I/O failed 00:30:46.256 Write completed with error (sct=0, sc=8) 00:30:46.256 starting I/O failed 00:30:46.256 Write completed with error (sct=0, sc=8) 00:30:46.256 starting I/O failed 00:30:46.256 Read completed with error (sct=0, sc=8) 00:30:46.256 starting I/O failed 00:30:46.256 Read completed with error (sct=0, sc=8) 00:30:46.256 starting I/O failed 00:30:46.257 Write completed with error (sct=0, sc=8) 00:30:46.257 starting I/O failed 00:30:46.257 Write completed with error (sct=0, sc=8) 00:30:46.257 starting I/O failed 00:30:46.257 Write completed with error (sct=0, sc=8) 00:30:46.257 starting I/O failed 00:30:46.257 Read completed with error (sct=0, sc=8) 00:30:46.257 starting I/O failed 00:30:46.257 Read completed with error (sct=0, sc=8) 00:30:46.257 starting I/O failed 00:30:46.257 Read completed with error (sct=0, sc=8) 00:30:46.257 starting I/O failed 00:30:46.257 Read completed with error (sct=0, sc=8) 00:30:46.257 starting I/O failed 00:30:46.257 Write completed with error (sct=0, sc=8) 00:30:46.257 starting I/O failed 00:30:46.257 Read completed with error (sct=0, sc=8) 00:30:46.257 starting I/O failed 00:30:46.257 Write completed with error (sct=0, sc=8) 00:30:46.257 starting I/O failed 00:30:46.257 Write completed with error (sct=0, sc=8) 00:30:46.257 starting I/O failed 00:30:46.257 Read completed with error (sct=0, sc=8) 00:30:46.257 starting I/O failed 00:30:46.257 Read completed with error (sct=0, sc=8) 00:30:46.257 starting I/O failed 00:30:46.257 Write completed with error (sct=0, sc=8) 00:30:46.257 starting I/O failed 00:30:46.257 Write completed with error (sct=0, sc=8) 00:30:46.257 starting I/O failed 00:30:46.257 Write completed with error (sct=0, sc=8) 00:30:46.257 starting I/O failed 00:30:46.257 Read completed with error (sct=0, sc=8) 00:30:46.257 starting I/O failed 00:30:46.257 Write completed with error (sct=0, sc=8) 00:30:46.257 starting I/O failed 00:30:46.257 Read completed with error (sct=0, sc=8) 00:30:46.257 starting I/O failed 
00:30:46.257 Read completed with error (sct=0, sc=8) 00:30:46.257 starting I/O failed 00:30:46.257 Write completed with error (sct=0, sc=8) 00:30:46.257 starting I/O failed 00:30:46.257 Read completed with error (sct=0, sc=8) 00:30:46.257 starting I/O failed 00:30:46.257 [2024-11-20 15:40:34.960290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.257 [2024-11-20 15:40:34.969320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.257 [2024-11-20 15:40:34.969418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.257 [2024-11-20 15:40:34.969481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.257 [2024-11-20 15:40:34.969506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.257 [2024-11-20 15:40:34.969527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc650c0 00:30:46.257 [2024-11-20 15:40:34.969583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:46.257 qpair failed and we were unable to recover it. 00:30:46.257 [2024-11-20 15:40:34.979343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.257 [2024-11-20 15:40:34.979406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.257 [2024-11-20 15:40:34.979437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.257 [2024-11-20 15:40:34.979452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.257 [2024-11-20 15:40:34.979467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc650c0 00:30:46.257 [2024-11-20 15:40:34.979499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:46.257 qpair failed and we were unable to recover it. 00:30:46.257 [2024-11-20 15:40:34.979659] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:30:46.257 A controller has encountered a failure and is being reset. 00:30:46.257 [2024-11-20 15:40:34.979779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc5ae00 (9): Bad file descriptor 00:30:46.257 Controller properly reset. 
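With the controller reset, the harness re-initializes below. The same connect path can also be exercised by hand with nvme-cli against the address and subsystem NQN from the errors above; a minimal sketch, assuming nvme-cli and the kernel nvme-tcp module are available on the initiator side and the target is still listening:

    # manual fabrics connect to the target seen in this log
    sudo modprobe nvme-tcp
    sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    # clean up if the connect succeeded
    sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1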
00:30:46.257 Initializing NVMe Controllers 00:30:46.257 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:46.257 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:46.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:46.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:46.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:46.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:46.257 Initialization complete. Launching workers. 00:30:46.257 Starting thread on core 1 00:30:46.257 Starting thread on core 2 00:30:46.257 Starting thread on core 3 00:30:46.257 Starting thread on core 0 00:30:46.257 15:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:30:46.257 00:30:46.257 real 0m11.532s 00:30:46.257 user 0m22.067s 00:30:46.257 sys 0m3.878s 00:30:46.257 15:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:46.257 15:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:46.257 ************************************ 00:30:46.257 END TEST nvmf_target_disconnect_tc2 00:30:46.257 ************************************ 00:30:46.257 15:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:30:46.257 15:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:46.257 15:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:30:46.257 15:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:46.257 15:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:30:46.257 15:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:46.257 15:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:30:46.257 15:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:46.257 15:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:46.257 rmmod nvme_tcp 00:30:46.257 rmmod nvme_fabrics 00:30:46.257 rmmod nvme_keyring 00:30:46.257 15:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:46.518 15:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:30:46.518 15:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:30:46.518 15:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 793835 ']' 00:30:46.518 15:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 793835 00:30:46.518 15:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 793835 ']' 00:30:46.518 15:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 793835 00:30:46.518 15:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:30:46.518 15:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = 
Linux ']' 00:30:46.518 15:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 793835 00:30:46.518 15:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:30:46.518 15:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:30:46.518 15:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 793835' 00:30:46.518 killing process with pid 793835 00:30:46.518 15:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 793835 00:30:46.518 15:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 793835 00:30:46.518 15:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:46.518 15:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:46.518 15:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:46.518 15:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:30:46.518 15:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:30:46.518 15:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:46.518 15:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:30:46.518 15:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:46.518 15:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:46.518 15:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:46.518 15:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:46.518 15:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:49.060 15:40:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:49.060 00:30:49.060 real 0m21.947s 00:30:49.060 user 0m50.274s 00:30:49.060 sys 0m10.042s 00:30:49.060 15:40:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:49.060 15:40:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:49.060 ************************************ 00:30:49.060 END TEST nvmf_target_disconnect 00:30:49.060 ************************************ 00:30:49.060 15:40:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:49.060 00:30:49.060 real 6m33.360s 00:30:49.060 user 11m34.714s 00:30:49.060 sys 2m15.377s 00:30:49.060 15:40:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:49.060 15:40:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.060 ************************************ 00:30:49.060 END TEST nvmf_host 00:30:49.060 ************************************ 00:30:49.060 15:40:37 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:30:49.060 15:40:37 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:30:49.060 15:40:37 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:49.060 15:40:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:49.060 15:40:37 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:49.060 15:40:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:49.060 ************************************ 00:30:49.060 START TEST nvmf_target_core_interrupt_mode 00:30:49.060 ************************************ 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:49.060 * Looking for test storage... 00:30:49.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:49.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.060 --rc genhtml_branch_coverage=1 00:30:49.060 --rc genhtml_function_coverage=1 00:30:49.060 --rc genhtml_legend=1 00:30:49.060 --rc geninfo_all_blocks=1 00:30:49.060 --rc geninfo_unexecuted_blocks=1 00:30:49.060 00:30:49.060 ' 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:49.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.060 --rc genhtml_branch_coverage=1 00:30:49.060 --rc genhtml_function_coverage=1 00:30:49.060 --rc genhtml_legend=1 00:30:49.060 --rc geninfo_all_blocks=1 00:30:49.060 --rc geninfo_unexecuted_blocks=1 00:30:49.060 00:30:49.060 ' 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:49.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.060 --rc genhtml_branch_coverage=1 00:30:49.060 --rc genhtml_function_coverage=1 00:30:49.060 --rc genhtml_legend=1 00:30:49.060 --rc geninfo_all_blocks=1 00:30:49.060 --rc geninfo_unexecuted_blocks=1 00:30:49.060 00:30:49.060 ' 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:49.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.060 --rc genhtml_branch_coverage=1 00:30:49.060 --rc genhtml_function_coverage=1 00:30:49.060 --rc genhtml_legend=1 00:30:49.060 --rc geninfo_all_blocks=1 00:30:49.060 --rc geninfo_unexecuted_blocks=1 00:30:49.060 00:30:49.060 ' 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:49.060 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:49.061 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:30:49.061 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:49.061 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:49.061 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:49.061 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.061 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.061 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.061 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:30:49.061 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.061 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:30:49.061 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:49.061 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:49.061 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:49.061 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:49.061 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:49.061 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:49.061 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:49.061 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:49.061 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:49.061 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:49.061 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:49.061 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:30:49.061 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:30:49.061 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:49.061 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:49.061 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:49.061 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:49.061 ************************************ 00:30:49.061 START TEST nvmf_abort 00:30:49.061 ************************************ 00:30:49.061 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:49.061 * Looking for test storage... 00:30:49.061 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:49.061 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:49.061 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:30:49.061 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:49.322 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:49.322 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:49.322 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:49.322 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:49.322 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:30:49.322 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:30:49.322 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:30:49.322 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:30:49.322 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:30:49.322 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:30:49.322 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:30:49.322 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:49.322 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:30:49.322 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:30:49.322 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:49.322 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:49.322 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:30:49.322 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:30:49.322 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:49.322 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:30:49.322 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:30:49.322 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:30:49.322 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:30:49.322 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:49.322 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:30:49.322 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:30:49.322 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:49.322 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:49.322 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:49.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.323 --rc genhtml_branch_coverage=1 00:30:49.323 --rc genhtml_function_coverage=1 00:30:49.323 --rc genhtml_legend=1 00:30:49.323 --rc geninfo_all_blocks=1 00:30:49.323 --rc geninfo_unexecuted_blocks=1 00:30:49.323 00:30:49.323 ' 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:49.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.323 --rc genhtml_branch_coverage=1 00:30:49.323 --rc genhtml_function_coverage=1 00:30:49.323 --rc genhtml_legend=1 00:30:49.323 --rc geninfo_all_blocks=1 00:30:49.323 --rc geninfo_unexecuted_blocks=1 00:30:49.323 00:30:49.323 ' 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:49.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.323 --rc genhtml_branch_coverage=1 00:30:49.323 --rc genhtml_function_coverage=1 00:30:49.323 --rc genhtml_legend=1 00:30:49.323 --rc geninfo_all_blocks=1 00:30:49.323 --rc geninfo_unexecuted_blocks=1 00:30:49.323 00:30:49.323 ' 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:49.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.323 --rc genhtml_branch_coverage=1 00:30:49.323 --rc genhtml_function_coverage=1 00:30:49.323 --rc genhtml_legend=1 00:30:49.323 --rc geninfo_all_blocks=1 00:30:49.323 --rc geninfo_unexecuted_blocks=1 00:30:49.323 00:30:49.323 ' 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:49.323 15:40:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:30:49.323 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:57.593 15:40:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:57.593 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:57.593 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:57.593 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:30:57.593 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:57.594 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:57.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:57.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:30:57.594 00:30:57.594 --- 10.0.0.2 ping statistics --- 00:30:57.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:57.594 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:57.594 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:57.594 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:30:57.594 00:30:57.594 --- 10.0.0.1 ping statistics --- 00:30:57.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:57.594 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=799420 
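The nvmf_tcp_init sequence traced above reduces to a two-port split: the target port (cvl_0_0) moves into a private network namespace while the initiator port (cvl_0_1) stays in the root namespace. A minimal sketch, using only commands and names visible in the trace:

  # Start from clean addresses on both e810 ports.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  # Target side gets its own namespace; the initiator stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port on the initiator side; the comment tag lets teardown
  # strip exactly the rules this test added and nothing else.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # Sanity check: each side must reach the other before the target starts.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1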
00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 799420 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 799420 ']' 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:57.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:57.594 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:57.594 [2024-11-20 15:40:45.703249] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:57.594 [2024-11-20 15:40:45.704402] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:30:57.594 [2024-11-20 15:40:45.704456] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:57.594 [2024-11-20 15:40:45.808054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:57.594 [2024-11-20 15:40:45.858905] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:57.594 [2024-11-20 15:40:45.858955] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:57.594 [2024-11-20 15:40:45.858964] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:57.594 [2024-11-20 15:40:45.858971] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:57.594 [2024-11-20 15:40:45.858978] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:57.594 [2024-11-20 15:40:45.861056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:57.594 [2024-11-20 15:40:45.861221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:57.594 [2024-11-20 15:40:45.861258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:57.594 [2024-11-20 15:40:45.937572] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:57.594 [2024-11-20 15:40:45.938429] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:57.594 [2024-11-20 15:40:45.938671] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
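The target itself is launched inside that namespace; a sketch of the start-and-wait step, where the polling loop is an assumption standing in for the waitforlisten helper (its internals are not traced here):

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" \
      -i 0 -e 0xFFFF --interrupt-mode -m 0xE &        # -m 0xE: reactors on cores 1-3
  nvmfpid=$!
  # waitforlisten (sketch): poll until the app answers on /var/tmp/spdk.sock.
  until "$spdk/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || exit 1                    # give up if the target died
      sleep 0.5
  done

The UNIX-domain RPC socket lives in the filesystem, not in the network namespace, so rpc.py can keep talking to the in-namespace target from the root namespace.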
00:30:57.594 [2024-11-20 15:40:45.938907] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:57.594 15:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:57.594 15:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:30:57.594 15:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:57.594 15:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:57.594 15:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:57.856 15:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:57.856 15:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:30:57.856 15:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.856 15:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:57.856 [2024-11-20 15:40:46.562264] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:57.856 15:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.856 15:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:30:57.856 15:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.856 15:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:57.856 Malloc0 00:30:57.856 15:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.856 15:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:57.856 15:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.856 15:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:57.856 Delay0 00:30:57.856 15:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.856 15:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:57.856 15:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.856 15:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:57.856 15:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.856 15:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:30:57.856 15:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:30:57.856 15:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:57.856 15:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.856 15:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:57.856 15:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.856 15:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:57.856 [2024-11-20 15:40:46.662211] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:57.856 15:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.856 15:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:57.856 15:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.856 15:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:57.856 15:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.856 15:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:30:57.856 [2024-11-20 15:40:46.805893] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:00.408 Initializing NVMe Controllers 00:31:00.408 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:31:00.408 controller IO queue size 128 less than required 00:31:00.408 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:31:00.408 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:31:00.408 Initialization complete. Launching workers. 
00:31:00.408 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28516 00:31:00.408 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28573, failed to submit 66 00:31:00.408 success 28516, unsuccessful 57, failed 0 00:31:00.408 15:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:00.408 15:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.408 15:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:00.408 15:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.408 15:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:31:00.408 15:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:31:00.408 15:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:00.408 15:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:31:00.408 15:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:00.408 15:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:31:00.408 15:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:00.408 15:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:00.408 rmmod nvme_tcp 00:31:00.408 rmmod nvme_fabrics 00:31:00.408 rmmod nvme_keyring 00:31:00.408 15:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:00.408 15:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:31:00.408 15:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:31:00.408 15:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 799420 ']' 00:31:00.408 15:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 799420 00:31:00.408 15:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 799420 ']' 00:31:00.408 15:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 799420 00:31:00.408 15:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:31:00.408 15:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:00.408 15:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 799420 00:31:00.408 15:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:00.408 15:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:00.408 15:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 799420' 00:31:00.408 killing process with pid 799420 00:31:00.408 
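The rpc_cmd calls traced above map one-to-one onto scripts/rpc.py against the default /var/tmp/spdk.sock; a sketch replaying the abort-test plumbing by hand ($rpc is shorthand introduced here, not a name from the scripts):

  rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
  $rpc bdev_malloc_create 64 4096 -b Malloc0          # 64 MB bdev, 4096-byte blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000     # latencies in microseconds, ~1 s each
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # Drive it: the delay bdev keeps I/O in flight long enough for aborts to land.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128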
15:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 799420 00:31:00.408 15:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 799420 00:31:00.408 15:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:00.408 15:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:00.408 15:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:00.408 15:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:31:00.408 15:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:31:00.408 15:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:00.408 15:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:31:00.408 15:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:00.408 15:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:00.408 15:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:00.408 15:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:00.408 15:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:02.956 00:31:02.956 real 0m13.442s 00:31:02.956 user 0m11.091s 00:31:02.956 sys 0m7.018s 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:02.956 ************************************ 00:31:02.956 END TEST nvmf_abort 00:31:02.956 ************************************ 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:02.956 ************************************ 00:31:02.956 START TEST nvmf_ns_hotplug_stress 00:31:02.956 ************************************ 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:31:02.956 * Looking for test storage... 
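Teardown, also visible above, is the mirror image; a sketch, where the ip netns del step is an assumption standing in for the _remove_spdk_ns helper (its body is not traced here):

  # iptr: restore the firewall minus every rule tagged SPDK_NVMF at setup time.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # Flush the initiator address and drop the target namespace (assumed cleanup).
  ip -4 addr flush cvl_0_1
  ip netns del cvl_0_0_ns_spdk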
00:31:02.956 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:02.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:02.956 --rc genhtml_branch_coverage=1 00:31:02.956 --rc genhtml_function_coverage=1 00:31:02.956 --rc genhtml_legend=1 00:31:02.956 --rc geninfo_all_blocks=1 00:31:02.956 --rc geninfo_unexecuted_blocks=1 00:31:02.956 00:31:02.956 ' 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:02.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:02.956 --rc genhtml_branch_coverage=1 00:31:02.956 --rc genhtml_function_coverage=1 00:31:02.956 --rc genhtml_legend=1 00:31:02.956 --rc geninfo_all_blocks=1 00:31:02.956 --rc geninfo_unexecuted_blocks=1 00:31:02.956 00:31:02.956 ' 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:02.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:02.956 --rc genhtml_branch_coverage=1 00:31:02.956 --rc genhtml_function_coverage=1 00:31:02.956 --rc genhtml_legend=1 00:31:02.956 --rc geninfo_all_blocks=1 00:31:02.956 --rc geninfo_unexecuted_blocks=1 00:31:02.956 00:31:02.956 ' 00:31:02.956 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:02.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:02.956 --rc genhtml_branch_coverage=1 00:31:02.956 --rc genhtml_function_coverage=1 
00:31:02.956 --rc genhtml_legend=1 00:31:02.956 --rc geninfo_all_blocks=1 00:31:02.956 --rc geninfo_unexecuted_blocks=1 00:31:02.956 00:31:02.957 ' 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:31:02.957 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:11.101 15:40:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:11.101 15:40:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:11.101 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:11.101 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:11.101 
15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:11.101 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:11.101 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:11.101 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:11.102 15:40:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:11.102 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:11.102 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:11.102 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:11.102 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:11.102 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:11.102 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:11.102 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:11.102 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:11.102 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:11.102 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:11.102 15:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:11.102 15:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:11.102 15:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:11.102 15:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:11.102 15:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:11.102 15:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:11.102 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:11.102 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.725 ms 00:31:11.102 00:31:11.102 --- 10.0.0.2 ping statistics --- 00:31:11.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:11.102 rtt min/avg/max/mdev = 0.725/0.725/0.725/0.000 ms 00:31:11.102 15:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:11.102 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:11.102 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:31:11.102 00:31:11.102 --- 10.0.0.1 ping statistics --- 00:31:11.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:11.102 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:31:11.102 15:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:11.102 15:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:31:11.102 15:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:11.102 15:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:11.102 15:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:11.102 15:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:11.102 15:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:11.102 15:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:11.102 15:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:11.102 15:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:31:11.102 15:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:11.102 15:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:11.102 15:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:11.102 15:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=804111 00:31:11.102 15:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 804111 00:31:11.102 15:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:31:11.102 15:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 804111 ']' 00:31:11.102 15:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:11.102 15:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:11.102 15:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:11.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:11.102 15:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:11.102 15:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:11.102 [2024-11-20 15:40:59.243939] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:11.102 [2024-11-20 15:40:59.245079] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:31:11.102 [2024-11-20 15:40:59.245129] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:11.102 [2024-11-20 15:40:59.344651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:11.102 [2024-11-20 15:40:59.396659] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:11.102 [2024-11-20 15:40:59.396710] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:11.102 [2024-11-20 15:40:59.396719] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:11.102 [2024-11-20 15:40:59.396726] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:11.102 [2024-11-20 15:40:59.396732] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:11.102 [2024-11-20 15:40:59.398766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:11.102 [2024-11-20 15:40:59.398927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:11.102 [2024-11-20 15:40:59.398928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:11.102 [2024-11-20 15:40:59.475147] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:11.102 [2024-11-20 15:40:59.476082] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:11.102 [2024-11-20 15:40:59.476532] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:11.102 [2024-11-20 15:40:59.476720] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:31:11.102 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:11.102 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:31:11.102 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:11.102 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:11.102 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:11.363 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:11.363 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:31:11.363 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:11.363 [2024-11-20 15:41:00.263825] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:11.363 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:11.624 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:11.886 [2024-11-20 15:41:00.660494] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:11.886 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:12.146 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:31:12.146 Malloc0 00:31:12.146 15:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:12.407 Delay0 00:31:12.407 15:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:12.668 15:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:31:12.668 NULL1 00:31:12.929 15:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
00:31:12.929 15:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:31:12.929 15:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=804789 00:31:12.929 15:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789 00:31:12.929 15:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:13.189 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:13.449 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:31:13.449 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:31:13.449 true 00:31:13.718 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789 00:31:13.718 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:13.718 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:13.980 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:31:13.980 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:31:14.240 true 00:31:14.240 15:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789 00:31:14.240 15:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:15.180 Read completed with error (sct=0, sc=11) 00:31:15.440 15:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:15.440 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:15.440 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:15.440 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:15.440 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:15.440 15:41:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:31:15.440 15:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:31:15.700 true 00:31:15.700 15:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789 00:31:15.700 15:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:16.641 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:16.641 15:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:16.641 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:16.641 15:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:31:16.641 15:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:31:16.902 true 00:31:16.902 15:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789 00:31:16.902 15:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:17.163 15:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:17.163 15:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:31:17.163 15:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:31:17.423 true 00:31:17.423 15:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789 00:31:17.423 15:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:17.682 15:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:17.682 15:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:31:17.682 15:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:31:17.942 true 00:31:17.942 
15:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789 00:31:17.942 15:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:18.204 15:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:18.204 15:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:31:18.204 15:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:31:18.464 true 00:31:18.464 15:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789 00:31:18.464 15:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:18.726 15:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:18.726 15:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:31:18.726 15:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:31:18.987 true 00:31:18.987 15:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789 00:31:18.987 15:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:19.247 15:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:19.248 15:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:31:19.248 15:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:31:19.509 true 00:31:19.509 15:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789 00:31:19.509 15:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:19.769 15:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:19.769 15:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:31:19.769 15:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:31:20.029 true 00:31:20.029 15:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789 00:31:20.029 15:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:20.289 15:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:20.289 15:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:31:20.289 15:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:31:20.550 true 00:31:20.550 15:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789 00:31:20.550 15:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:20.810 15:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:20.810 15:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:31:20.810 15:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:31:21.071 true 00:31:21.071 15:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789 00:31:21.071 15:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:21.331 15:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:21.592 15:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:31:21.592 15:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:31:21.592 true 00:31:21.592 15:41:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789 00:31:21.592 15:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:21.852 15:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:21.852 15:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:31:21.852 15:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:31:22.113 true 00:31:22.113 15:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789 00:31:22.113 15:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:22.374 15:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:22.634 15:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:31:22.634 15:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:31:22.634 true 00:31:22.634 15:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789 00:31:22.634 15:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:22.894 15:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:23.156 15:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:31:23.156 15:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:31:23.156 true 00:31:23.156 15:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789 00:31:23.156 15:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:23.417 15:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:23.678 15:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:31:23.678 15:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:31:23.678 true 00:31:23.678 15:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789 00:31:23.678 15:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:23.938 15:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:24.198 15:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:31:24.198 15:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:31:24.198 true 00:31:24.198 15:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789 00:31:24.198 15:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:24.459 15:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:24.719 15:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:31:24.719 15:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:31:24.719 true 00:31:24.719 15:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789 00:31:24.719 15:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:24.980 15:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:25.241 15:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:31:25.241 15:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:31:25.241 true 00:31:25.241 15:41:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789 00:31:25.241 15:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:25.502 15:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:25.762 15:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:31:25.762 15:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:31:25.762 true 00:31:25.762 15:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789 00:31:25.762 15:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:27.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:27.146 15:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:27.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:27.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:27.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:27.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:27.147 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:27.147 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:27.147 15:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:31:27.147 15:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:31:27.407 true 00:31:27.407 15:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789 00:31:27.407 15:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:28.350 15:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:28.350 15:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:31:28.350 15:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1023 00:31:28.611 true 00:31:28.611 15:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789 00:31:28.611 15:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:28.872 15:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:28.872 15:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:31:28.872 15:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:31:29.132 true 00:31:29.132 15:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789 00:31:29.132 15:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:30.515 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:30.515 15:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:30.515 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:30.515 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:30.515 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:30.515 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:30.515 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:30.515 15:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:31:30.515 15:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:31:30.515 true 00:31:30.515 15:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789 00:31:30.515 15:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:31.455 15:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:31.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:31.716 15:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:31:31.716 15:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:31:31.716 true 00:31:31.716 15:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789 00:31:31.716 15:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:31.976 15:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:32.236 15:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:31:32.236 15:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:31:32.236 true 00:31:32.236 15:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789 00:31:32.236 15:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:32.497 15:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:32.758 15:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:31:32.758 15:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:31:32.758 true 00:31:33.019 15:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789 00:31:33.019 15:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:33.019 15:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:33.278 15:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:31:33.278 15:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:31:33.278 true 00:31:33.539 15:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789 00:31:33.539 15:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:34.479 15:41:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:34.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:34.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:34.740 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:34.740 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:34.740 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:34.740 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:34.740 15:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:31:34.740 15:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:31:35.001 true 00:31:35.001 15:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789 00:31:35.001 15:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:35.943 15:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:35.943 15:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:31:35.943 15:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:31:36.204 true 00:31:36.204 15:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789 00:31:36.204 15:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:36.204 15:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:36.464 15:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:31:36.464 15:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:31:36.725 true 00:31:36.725 15:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789 00:31:36.725 15:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:37.666 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:31:37.666 15:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:37.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:37.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:37.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:37.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:37.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:37.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:37.927 15:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:31:37.927 15:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:31:38.192 true 00:31:38.192 15:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789 00:31:38.192 15:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:39.137 15:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:39.137 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:39.137 15:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:31:39.137 15:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:31:39.398 true 00:31:39.398 15:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789 00:31:39.398 15:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:39.398 15:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:39.659 15:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:31:39.659 15:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:31:39.919 true 00:31:39.919 15:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789 00:31:39.919 15:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:39.919 15:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:39.919 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:40.180 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:40.180 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:40.180 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:40.180 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:40.180 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:40.180 15:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:31:40.180 15:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:31:40.447 true 00:31:40.447 15:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789 00:31:40.447 15:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:41.391 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:41.391 15:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:41.391 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:41.391 15:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:31:41.391 15:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:31:41.652 true 00:31:41.652 15:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789 00:31:41.652 15:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:41.652 15:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:41.913 15:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:31:41.913 15:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:31:42.173 true 00:31:42.173 15:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill 
-0 804789
00:31:42.173 15:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:42.173 15:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:42.433 15:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039
00:31:42.433 15:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039
00:31:42.694 true
00:31:42.694 15:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789
00:31:42.694 15:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:42.955 15:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:42.955 15:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040
00:31:42.955 15:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040
00:31:43.216 true
00:31:43.216 15:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789
00:31:43.216 15:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:43.216 Initializing NVMe Controllers
00:31:43.216 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:43.216 Controller IO queue size 128, less than required.
00:31:43.216 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:43.216 Controller IO queue size 128, less than required.
00:31:43.216 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:43.216 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:43.216 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:31:43.216 Initialization complete. Launching workers.
00:31:43.216 ========================================================
00:31:43.216 Latency(us)
00:31:43.216 Device Information : IOPS MiB/s Average min max
00:31:43.216 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1465.20 0.72 36596.77 1784.66 1088108.19
00:31:43.216 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 12793.95 6.25 9971.47 1101.74 403451.57
00:31:43.216 ========================================================
00:31:43.216 Total : 14259.15 6.96 12707.36 1101.74 1088108.19
00:31:43.216
00:31:43.477 15:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:43.477 15:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041
00:31:43.477 15:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041
00:31:43.738 true
00:31:43.738 15:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 804789
00:31:43.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (804789) - No such process
00:31:43.738 15:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 804789
00:31:43.738 15:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:43.998 15:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:31:43.998 15:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:31:43.998 15:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:31:43.998 15:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:31:43.998 15:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:43.998 15:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:31:44.258 null0
00:31:44.258 15:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:44.258 15:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:44.258 15:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:31:44.519 null1
00:31:44.519 15:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:44.519
15:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:44.519 15:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:31:44.519 null2 00:31:44.519 15:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:44.519 15:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:44.519 15:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:31:44.779 null3 00:31:44.779 15:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:44.779 15:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:44.779 15:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:31:45.040 null4 00:31:45.040 15:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:45.040 15:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:45.040 15:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:31:45.040 null5 00:31:45.040 15:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:45.040 15:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:45.040 15:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:31:45.300 null6 00:31:45.300 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:45.300 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:45.300 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:31:45.300 null7 00:31:45.300 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:45.561 15:41:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 810928 810929 810931 810933 810935 810937 810938 810940 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:45.561 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:45.823 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:45.823 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:45.823 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:45.823 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:45.823 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:45.823 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:45.823 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:45.823 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:45.823 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:45.823 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:45.823 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:45.823 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:45.823 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:45.823 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:45.823 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:45.823 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:45.823 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:45.823 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:45.823 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:45.823 15:41:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:45.823 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:45.823 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:45.823 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:45.823 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:45.823 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:46.085 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:46.085 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:46.085 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:46.085 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:46.085 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:46.085 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:46.085 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:46.085 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:46.085 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:46.085 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:46.085 15:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:46.085 15:41:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:46.085 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:46.085 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:46.085 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:46.085 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:46.085 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:46.085 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:46.085 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:46.085 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:46.346 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:46.346 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:46.346 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:46.346 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:46.346 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:46.346 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:46.346 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:46.346 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:46.346 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:46.346 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:46.346 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:46.346 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:46.346 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:46.346 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:46.346 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:46.346 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:46.346 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:46.346 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:46.607 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:46.607 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:46.607 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:46.607 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:46.608 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:46.608 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:46.608 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:46.608 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:46.608 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:46.608 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:46.608 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:46.608 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:46.608 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:46.608 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:46.608 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:46.608 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:46.608 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:46.608 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:46.608 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:46.608 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:46.608 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:46.608 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:46.608 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:46.608 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:46.608 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:46.608 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:46.608 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:46.608 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:46.869 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:46.869 15:41:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:46.869 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:46.869 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:46.869 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:46.869 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:46.869 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:46.869 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:46.869 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:46.869 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:46.869 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:46.869 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:46.869 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:46.869 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:46.869 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:46.869 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:46.869 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:46.869 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:46.869 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:46.869 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:31:46.869 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:46.869 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:46.869 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:46.869 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:47.130 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:47.130 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:47.130 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:47.130 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:47.130 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:47.130 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:47.131 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:47.131 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:47.131 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:47.131 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:47.131 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:47.131 15:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:47.131 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:47.131 
15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:47.131 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:47.131 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:47.131 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:47.131 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:47.131 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:47.131 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:47.131 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:47.131 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:47.131 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:47.421 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:47.421 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:47.421 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:47.421 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:47.421 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:47.421 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:47.421 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:47.421 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:47.421 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:47.421 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:47.421 15:41:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:47.421 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:47.421 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:47.421 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:47.421 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:47.421 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:47.421 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:47.421 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:47.421 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:47.421 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:47.421 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:47.421 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:47.736 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:47.736 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:47.736 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:47.736 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:47.736 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:47.736 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:47.736 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:47.736 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:47.736 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:47.736 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:47.736 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:47.736 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:47.736 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:47.736 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:47.736 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:47.736 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:47.736 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:47.736 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:47.736 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:47.736 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:47.736 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:47.736 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:47.736 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:47.736 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:47.736 15:41:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:47.736 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:47.736 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:47.737 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:47.737 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:48.052 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:48.052 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:48.052 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:48.052 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:48.052 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:48.052 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:48.052 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:48.052 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:48.052 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:48.052 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:48.052 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:48.052 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:48.052 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:48.052 
15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:48.052 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:48.052 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:48.052 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:48.052 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:48.052 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:48.052 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:48.052 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:48.052 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:48.052 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:48.052 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:48.052 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:48.052 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:48.052 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:48.052 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:48.052 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:48.052 15:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:48.313 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:48.313 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:31:48.313 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:48.313 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:48.313 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:48.313 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:48.313 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:48.313 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:48.313 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:48.313 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:48.313 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:48.313 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:48.313 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:48.313 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:48.313 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:48.313 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:48.313 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:48.313 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:48.313 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:48.313 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:48.313 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:48.574 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:48.574 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:48.574 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:48.574 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:48.574 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:48.574 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:48.574 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:48.574 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:48.574 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:48.574 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:48.574 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:48.574 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:48.574 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:48.574 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:48.574 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:48.574 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:48.574 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:48.574 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:48.574 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:48.574 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:48.574 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:48.574 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:48.835 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:48.835 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:48.835 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:48.835 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:48.835 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:48.835 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:48.835 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:48.835 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:48.835 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:48.835 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:48.835 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:48.835 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:48.835 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:48.835 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:48.835 15:41:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:48.835 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:48.835 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:48.835 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:48.835 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:49.096 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:49.096 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:49.096 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:49.096 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:49.096 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:49.096 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:49.096 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:49.096 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:49.096 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:49.096 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:49.096 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:49.096 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:49.096 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:49.096 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:49.096 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:49.096 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:49.096 15:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:49.096 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:49.096 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:49.096 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:49.357 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:49.357 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:49.357 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:49.357 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:49.357 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:31:49.357 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:31:49.357 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:49.357 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:31:49.357 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:49.357 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:31:49.357 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:49.358 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:49.358 rmmod nvme_tcp 00:31:49.358 rmmod nvme_fabrics 00:31:49.358 rmmod nvme_keyring 00:31:49.358 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:49.358 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:31:49.358 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:31:49.358 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 804111 ']' 00:31:49.358 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 804111 00:31:49.358 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 804111 ']' 00:31:49.358 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 804111 00:31:49.358 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@959 -- # uname 00:31:49.358 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:49.358 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 804111 00:31:49.619 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:49.619 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:49.619 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 804111' 00:31:49.619 killing process with pid 804111 00:31:49.619 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 804111 00:31:49.619 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 804111 00:31:49.619 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:49.619 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:49.619 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:49.619 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:31:49.619 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:31:49.619 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:49.619 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:31:49.619 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:49.619 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:49.619 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:49.619 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:49.619 15:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:52.165 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:52.165 00:31:52.165 real 0m49.136s 00:31:52.165 user 3m1.348s 00:31:52.165 sys 0m21.173s 00:31:52.165 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:52.165 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:52.165 ************************************ 00:31:52.165 END TEST nvmf_ns_hotplug_stress 00:31:52.165 ************************************ 00:31:52.165 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:52.165 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:52.165 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:52.166 ************************************ 00:31:52.166 START TEST nvmf_delete_subsystem 00:31:52.166 ************************************ 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:52.166 * Looking for test storage... 00:31:52.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:52.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.166 --rc genhtml_branch_coverage=1 00:31:52.166 --rc genhtml_function_coverage=1 00:31:52.166 --rc genhtml_legend=1 00:31:52.166 --rc geninfo_all_blocks=1 00:31:52.166 --rc geninfo_unexecuted_blocks=1 00:31:52.166 00:31:52.166 ' 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:52.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.166 --rc genhtml_branch_coverage=1 00:31:52.166 --rc genhtml_function_coverage=1 00:31:52.166 --rc genhtml_legend=1 00:31:52.166 --rc geninfo_all_blocks=1 00:31:52.166 --rc geninfo_unexecuted_blocks=1 00:31:52.166 00:31:52.166 ' 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:52.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.166 --rc genhtml_branch_coverage=1 00:31:52.166 --rc genhtml_function_coverage=1 00:31:52.166 --rc genhtml_legend=1 00:31:52.166 --rc geninfo_all_blocks=1 00:31:52.166 --rc geninfo_unexecuted_blocks=1 00:31:52.166 00:31:52.166 ' 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:52.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.166 --rc genhtml_branch_coverage=1 00:31:52.166 --rc genhtml_function_coverage=1 00:31:52.166 --rc 
genhtml_legend=1 00:31:52.166 --rc geninfo_all_blocks=1 00:31:52.166 --rc geninfo_unexecuted_blocks=1 00:31:52.166 00:31:52.166 ' 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:52.166 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:52.166 15:41:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=[/opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin repeated several times, followed by the system PATH ending in /var/lib/snapd/snap/bin] 00:31:52.167 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=[as above, rotated to begin with /opt/go/1.21.1/bin] 00:31:52.167 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=[as above, rotated to begin with /opt/protoc/21.7/bin] 00:31:52.167 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:31:52.167 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo [the PATH value above] 00:31:52.167 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:31:52.167 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:52.167 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:52.167 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:52.167 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:52.167 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:52.167 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:52.167 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:52.167 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:52.167 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:52.167 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:52.167 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:31:52.167 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:52.167 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:52.167 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:52.167 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:52.167 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:52.167 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:52.167 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:52.167 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:52.167 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:52.167 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:52.167 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:31:52.167 15:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:00.308 15:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:00.308 15:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:32:00.308 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:00.308 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:00.308 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:00.308 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:00.308 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:00.308 15:41:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:32:00.308 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:00.308 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:00.309 15:41:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:00.309 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:00.309 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.309 15:41:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:00.309 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:00.309 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:00.309 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:00.310 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:00.310 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:00.310 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:00.310 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:00.310 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:00.310 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:00.310 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:00.310 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:00.310 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:00.310 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.503 ms 00:32:00.310 00:32:00.310 --- 10.0.0.2 ping statistics --- 00:32:00.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.310 rtt min/avg/max/mdev = 0.503/0.503/0.503/0.000 ms 00:32:00.310 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:00.310 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:00.310 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:32:00.310 00:32:00.310 --- 10.0.0.1 ping statistics --- 00:32:00.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.310 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:32:00.310 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:00.310 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:32:00.310 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:00.310 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:00.310 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:00.310 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:00.310 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:00.310 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:00.310 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:00.310 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:32:00.310 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:00.310 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:00.310 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:00.310 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=816094 00:32:00.310 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 816094 00:32:00.310 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:00.310 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 816094 ']' 00:32:00.310 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:00.310 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:00.310 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:00.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
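For reference, the nvmf_tcp_init sequence traced above builds the test topology: the target-side port of the e810 pair (cvl_0_0) is moved into a private network namespace while the initiator-side port (cvl_0_1) stays in the root namespace, so NVMe/TCP traffic between 10.0.0.1 and 10.0.0.2 crosses a real link (the two ports appear to be looped back to back on this rig). A minimal standalone sketch of the same setup, using the interface names from this run:

# move the target port into its own namespace; the initiator port stays in the root ns
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# admit NVMe/TCP (port 4420) on the initiator-side interface, then verify both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1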
00:32:00.310 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:00.310 15:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:00.310 [2024-11-20 15:41:48.409150] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:00.310 [2024-11-20 15:41:48.410302] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:32:00.310 [2024-11-20 15:41:48.410350] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:00.310 [2024-11-20 15:41:48.509069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:00.310 [2024-11-20 15:41:48.561065] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:00.310 [2024-11-20 15:41:48.561114] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:00.310 [2024-11-20 15:41:48.561123] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:00.310 [2024-11-20 15:41:48.561131] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:00.310 [2024-11-20 15:41:48.561137] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:00.310 [2024-11-20 15:41:48.562770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:00.310 [2024-11-20 15:41:48.562774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:00.310 [2024-11-20 15:41:48.638593] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:00.310 [2024-11-20 15:41:48.639114] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:00.310 [2024-11-20 15:41:48.639455] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
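The target binary itself is launched inside that namespace with a two-core mask and --interrupt-mode, which is why the startup notices above report reactors on cores 0 and 1 and each spdk_thread being set to intr mode. Reduced to its essentials, the nvmfappstart call traced above amounts to the following (waitforlisten is the autotest helper that polls the RPC socket; the pid was 816094 in this run):

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
nvmfpid=$!
waitforlisten "$nvmfpid"   # block until /var/tmp/spdk.sock accepts RPCs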
00:32:00.310 15:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:00.310 15:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:32:00.310 15:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:00.310 15:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:00.310 15:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:00.572 15:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:00.572 15:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:00.572 15:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.572 15:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:00.572 [2024-11-20 15:41:49.283793] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:00.572 15:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.572 15:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:00.572 15:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.572 15:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:00.572 15:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.572 15:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:00.572 15:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.572 15:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:00.572 [2024-11-20 15:41:49.316323] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:00.572 15:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.572 15:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:32:00.572 15:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.572 15:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:00.572 NULL1 00:32:00.572 15:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.572 15:41:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:00.572 15:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.572 15:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:00.572 Delay0 00:32:00.572 15:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.572 15:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:00.572 15:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.572 15:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:00.572 15:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.572 15:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=816196 00:32:00.572 15:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:32:00.572 15:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:32:00.572 [2024-11-20 15:41:49.448273] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
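Condensed from the rpc_cmd traces above, the test body is deliberately simple: a null bdev is wrapped in a delay bdev whose read/write latencies are all set to 1000000 us (about 1 s per op), perf then runs five seconds of queue-depth-128 random I/O against it, and the subsystem is deleted two seconds in, while plenty of commands are still queued behind the delay. A standalone sketch of the same flow (rpc.py stands for scripts/rpc.py; perf is pinned to cores 2-3, clear of the target's cores 0-1):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_null_create NULL1 1000 512        # 1000 MB backing, 512-byte blocks
rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
sleep 2
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # pull the subsystem mid-run

Deleting the subsystem tears down the queue pairs under the initiator, so every command still held by the delay bdev completes aborted, which is the flood of error completions that follows.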
00:32:02.486 15:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:02.486 15:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.486 15:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:32:02.747 [repeated groups of "Read/Write completed with error (sct=0, sc=8)" followed by "starting I/O failed: -6" as perf's queued commands are aborted; sct=0/sc=8 is NVMe generic status 08h, Command Aborted due to SQ Deletion, and -6 is likely -ENXIO from the failed resubmission]
00:32:02.747 [2024-11-20 15:41:51.614397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1198680 is same with the state(6) to be set
00:32:02.747 [long run of "Read/Write completed with error (sct=0, sc=8)" entries, then further "starting I/O failed: -6" groups]
00:32:02.748 [2024-11-20 15:41:51.616582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd65400d490 is same with the state(6) to be set
00:32:02.748 [more "Read/Write completed with error (sct=0, sc=8)" entries]
00:32:03.691 [2024-11-20 15:41:52.587682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11999a0 is same with the state(6) to be set
00:32:03.691 [more "Read/Write completed with error (sct=0, sc=8)" entries]
00:32:03.691 [2024-11-20 15:41:52.616967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd65400d7c0 is same with the state(6) to be set
00:32:03.691 [more "Read/Write completed with error (sct=0, sc=8)" entries]
00:32:03.691 [2024-11-20 15:41:52.617447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd65400d020 is same with the state(6) to be set
00:32:03.691 [more "Read/Write completed with error (sct=0, sc=8)" entries] Read completed with error (sct=0,
sc=8) 00:32:03.691 Write completed with error (sct=0, sc=8) 00:32:03.691 Read completed with error (sct=0, sc=8) 00:32:03.692 Write completed with error (sct=0, sc=8) 00:32:03.692 Read completed with error (sct=0, sc=8) 00:32:03.692 Read completed with error (sct=0, sc=8) 00:32:03.692 Read completed with error (sct=0, sc=8) 00:32:03.692 Write completed with error (sct=0, sc=8) 00:32:03.692 Read completed with error (sct=0, sc=8) 00:32:03.692 Read completed with error (sct=0, sc=8) 00:32:03.692 Read completed with error (sct=0, sc=8) 00:32:03.692 Read completed with error (sct=0, sc=8) 00:32:03.692 Write completed with error (sct=0, sc=8) 00:32:03.692 Read completed with error (sct=0, sc=8) 00:32:03.692 Read completed with error (sct=0, sc=8) 00:32:03.692 [2024-11-20 15:41:52.618255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11984a0 is same with the state(6) to be set 00:32:03.692 Read completed with error (sct=0, sc=8) 00:32:03.692 Write completed with error (sct=0, sc=8) 00:32:03.692 Read completed with error (sct=0, sc=8) 00:32:03.692 Write completed with error (sct=0, sc=8) 00:32:03.692 Read completed with error (sct=0, sc=8) 00:32:03.692 Read completed with error (sct=0, sc=8) 00:32:03.692 Read completed with error (sct=0, sc=8) 00:32:03.692 Read completed with error (sct=0, sc=8) 00:32:03.692 Read completed with error (sct=0, sc=8) 00:32:03.692 Read completed with error (sct=0, sc=8) 00:32:03.692 Write completed with error (sct=0, sc=8) 00:32:03.692 Write completed with error (sct=0, sc=8) 00:32:03.692 Read completed with error (sct=0, sc=8) 00:32:03.692 Read completed with error (sct=0, sc=8) 00:32:03.692 Read completed with error (sct=0, sc=8) 00:32:03.692 Write completed with error (sct=0, sc=8) 00:32:03.692 Write completed with error (sct=0, sc=8) 00:32:03.692 Read completed with error (sct=0, sc=8) 00:32:03.692 Read completed with error (sct=0, sc=8) 00:32:03.692 Read completed with error (sct=0, sc=8) 00:32:03.692 Read completed with error (sct=0, sc=8) 00:32:03.692 Read completed with error (sct=0, sc=8) 00:32:03.692 Write completed with error (sct=0, sc=8) 00:32:03.692 Read completed with error (sct=0, sc=8) 00:32:03.692 [2024-11-20 15:41:52.618776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1198860 is same with the state(6) to be set 00:32:03.692 Initializing NVMe Controllers 00:32:03.692 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:03.692 Controller IO queue size 128, less than required. 00:32:03.692 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:03.692 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:03.692 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:03.692 Initialization complete. Launching workers. 
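A note for readers triaging the flood elided above: sct=0 is the NVMe Generic Command Status type, and within that type sc=8 (0x08) is Command Aborted due to SQ Deletion, which is exactly what this test provokes by deleting the subsystem while spdk_nvme_perf still has I/O queued; the repeated "starting I/O failed: -6" entries are the submission path refusing new I/O on the failing qpair (presumably -ENXIO). To tally Read versus Write aborts from a saved copy of the raw console output (illustrative one-liner; build.log is a stand-in for wherever you saved it):

    grep -o '[RW][a-z]* completed with error (sct=0, sc=8)' build.log | sort | uniq -c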
00:32:03.692 ========================================================
00:32:03.692 Latency(us)
00:32:03.692 Device Information : IOPS MiB/s Average min max
00:32:03.692 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 170.00 0.08 893514.51 385.57 1010470.73
00:32:03.692 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 146.14 0.07 1005595.67 295.57 2001253.28
00:32:03.692 ========================================================
00:32:03.692 Total : 316.13 0.15 945325.61 295.57 2001253.28
00:32:03.692
00:32:03.692 [2024-11-20 15:41:52.619179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11999a0 (9): Bad file descriptor
00:32:03.692 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:32:03.692 15:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:03.692 15:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:32:03.692 15:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 816196
00:32:03.692 15:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:32:04.263 15:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:32:04.263 15:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 816196
00:32:04.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (816196) - No such process
00:32:04.263 15:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 816196
00:32:04.263 15:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:32:04.263 15:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 816196
00:32:04.263 15:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:32:04.263 15:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:32:04.263 15:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:32:04.263 15:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:32:04.263 15:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 816196
00:32:04.263 15:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:32:04.263 15:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:32:04.263 15:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:32:04.263 15:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:32:04.263 15:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:32:04.263 15:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:04.263 15:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:32:04.263 15:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:04.263 15:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:32:04.263 15:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:04.263 15:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:32:04.263 [2024-11-20 15:41:53.152236] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:04.263 15:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:04.263 15:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:04.263 15:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:04.263 15:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:32:04.263 15:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:04.263 15:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=816948
00:32:04.263 15:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:32:04.263 15:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:32:04.263 15:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 816948
00:32:04.263 15:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:32:04.524 [2024-11-20 15:41:53.252388] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
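The three rpc_cmd invocations above map one-to-one onto SPDK's scripts/rpc.py (rpc_cmd is the test harness's thin wrapper around it). As a standalone sketch, assuming the default /var/tmp/spdk.sock RPC socket and that the Delay0 bdev already exists:

    # Recreate the subsystem (-a: allow any host, -s: serial number, -m: max namespaces)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    # Expose it on the target's TCP transport
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Attach the Delay0 bdev as a namespace
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The perf invocation that follows uses -c 0xC, a core mask selecting cores 2 and 3, which is why the controller-attach output in this run reports NSID 1 associated with lcores 2 and 3.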
00:32:04.785-00:32:07.324 15:41:53-15:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- [six identical polling iterations elided: target/delete_subsystem.sh@60 -- # (( delay++ > 20 )), then @57 -- # kill -0 816948, then @58 -- # sleep 0.5, repeated at 00:32:04.785, 00:32:05.353, 00:32:05.923, 00:32:06.493, 00:32:06.755 and 00:32:07.324]
00:32:07.584 Initializing NVMe Controllers
00:32:07.584 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:07.584 Controller IO queue size 128, less than required.
00:32:07.584 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:07.584 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:32:07.584 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:32:07.584 Initialization complete. Launching workers.
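The @60/@57/@58 lines that repeat above are delete_subsystem.sh's bounded wait for the backgrounded perf process. Stripped of the xtrace noise, the pattern is simply the following (a sketch in our own variable names, not the script verbatim):

    # Poll a background pid; give up after roughly 10s (20 * 0.5s), as line 60 does
    perf_pid=$!   # pid of the spdk_nvme_perf just launched with &
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && break
        sleep 0.5
    done

kill -0 sends no signal; it only reports whether the pid is still alive, which is why the loop ends with bash's "No such process" complaint once perf exits, as seen just below.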
00:32:07.584 ========================================================
00:32:07.584 Latency(us)
00:32:07.584 Device Information : IOPS MiB/s Average min max
00:32:07.584 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002396.45 1000132.95 1008269.77
00:32:07.584 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005125.69 1000212.44 1042680.06
00:32:07.584 ========================================================
00:32:07.584 Total : 256.00 0.12 1003761.07 1000132.95 1042680.06
00:32:07.584
00:32:07.844 15:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:32:07.845 15:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 816948
00:32:07.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (816948) - No such process
00:32:07.845 15:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 816948
00:32:07.845 15:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:32:07.845 15:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:32:07.845 15:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:07.845 15:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:32:07.845 15:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:07.845 15:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:32:07.845 15:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:07.845 15:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:07.845 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:32:07.845 15:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:07.845 15:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:32:07.845 15:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:32:07.845 15:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 816094 ']'
00:32:07.845 15:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 816094
00:32:07.845 15:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 816094 ']'
00:32:07.845 15:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 816094
00:32:07.845 15:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:32:07.845 15:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:07.845 15:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 816094 00:32:08.105 15:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:08.105 15:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:08.105 15:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 816094' 00:32:08.105 killing process with pid 816094 00:32:08.105 15:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 816094 00:32:08.106 15:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 816094 00:32:08.106 15:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:08.106 15:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:08.106 15:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:08.106 15:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:32:08.106 15:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:32:08.106 15:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:08.106 15:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:32:08.106 15:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:08.106 15:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:08.106 15:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:08.106 15:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:08.106 15:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:10.652 00:32:10.652 real 0m18.366s 00:32:10.652 user 0m26.804s 00:32:10.652 sys 0m7.467s 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:10.652 ************************************ 00:32:10.652 END TEST nvmf_delete_subsystem 00:32:10.652 ************************************ 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:10.652 ************************************ 00:32:10.652 START TEST nvmf_host_management 00:32:10.652 ************************************ 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:32:10.652 * Looking for test storage... 00:32:10.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:32:10.652 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:10.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.653 --rc genhtml_branch_coverage=1 00:32:10.653 --rc genhtml_function_coverage=1 00:32:10.653 --rc genhtml_legend=1 00:32:10.653 --rc geninfo_all_blocks=1 00:32:10.653 --rc geninfo_unexecuted_blocks=1 00:32:10.653 00:32:10.653 ' 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:10.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.653 --rc genhtml_branch_coverage=1 00:32:10.653 --rc genhtml_function_coverage=1 00:32:10.653 --rc genhtml_legend=1 00:32:10.653 --rc geninfo_all_blocks=1 00:32:10.653 --rc geninfo_unexecuted_blocks=1 00:32:10.653 00:32:10.653 ' 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:10.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.653 --rc genhtml_branch_coverage=1 00:32:10.653 --rc genhtml_function_coverage=1 00:32:10.653 --rc genhtml_legend=1 00:32:10.653 --rc geninfo_all_blocks=1 00:32:10.653 --rc geninfo_unexecuted_blocks=1 00:32:10.653 00:32:10.653 ' 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:10.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.653 --rc genhtml_branch_coverage=1 00:32:10.653 --rc genhtml_function_coverage=1 00:32:10.653 --rc genhtml_legend=1 
00:32:10.653 --rc geninfo_all_blocks=1 00:32:10.653 --rc geninfo_unexecuted_blocks=1 00:32:10.653 00:32:10.653 ' 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same three toolchain directories repeated several more times]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same repeated toolchain directories and system tail as above]
00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same repeated toolchain directories and system tail as above]
00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo [the full PATH value above, elided]
00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0
00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:32:10.653 15:41:59
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:32:10.653 15:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:18.792 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:18.793 15:42:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:18.793 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:18.793 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
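The two "Found 0000:4b:00.x (0x8086 - 0x159b)" lines above come from gather_supported_nvmf_pci_devs walking sysfs: each PCI function's vendor/device pair is matched against the e810 ID list, and the net/ walk that follows prints the bound kernel interfaces (cvl_0_0 and cvl_0_1 below). A minimal standalone sketch of that walk (ours, not the harness code):

    # List kernel net interfaces for Intel E810-class functions (0x8086:0x159b)
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "Found net device under ${pci##*/}: ${net##*/}"
        done
    done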
00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:18.793 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:18.793 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:18.793 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:18.794 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:18.794 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:18.794 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:18.794 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:18.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:18.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:32:18.794 00:32:18.794 --- 10.0.0.2 ping statistics --- 00:32:18.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:18.794 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:32:18.794 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:18.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:18.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:32:18.794 00:32:18.794 --- 10.0.0.1 ping statistics --- 00:32:18.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:18.794 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:32:18.794 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:18.794 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:32:18.794 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:18.794 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:18.794 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:18.794 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:18.794 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:18.794 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:18.794 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:18.794 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:32:18.794 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:32:18.794 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:32:18.794 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:18.794 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:18.794 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:18.794 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=821903 00:32:18.794 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 821903 00:32:18.794 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:32:18.794 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 821903 ']' 00:32:18.794 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:18.794 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:18.794 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:18.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:18.794 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:18.794 15:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:18.794 [2024-11-20 15:42:06.927544] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:18.794 [2024-11-20 15:42:06.928652] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:32:18.794 [2024-11-20 15:42:06.928703] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:18.794 [2024-11-20 15:42:07.028875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:18.794 [2024-11-20 15:42:07.082308] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:18.794 [2024-11-20 15:42:07.082362] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:18.794 [2024-11-20 15:42:07.082371] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:18.794 [2024-11-20 15:42:07.082378] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:18.794 [2024-11-20 15:42:07.082384] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:18.794 [2024-11-20 15:42:07.084395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:18.794 [2024-11-20 15:42:07.084562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:18.794 [2024-11-20 15:42:07.084724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:18.794 [2024-11-20 15:42:07.084724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:18.794 [2024-11-20 15:42:07.162858] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:18.794 [2024-11-20 15:42:07.163878] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:18.794 [2024-11-20 15:42:07.164152] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:18.794 [2024-11-20 15:42:07.164744] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:18.794 [2024-11-20 15:42:07.164776] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
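
The nvmf_tcp_init sequence above builds the standard two-port SPDK TCP test topology: the target interface (cvl_0_0) is moved into a private network namespace while the initiator interface (cvl_0_1) stays in the root namespace, so traffic between 10.0.0.1 and 10.0.0.2 crosses the physical link between the two E810 ports rather than a loopback device. A minimal standalone sketch of the same setup, assuming the interface names and 10.0.0.0/24 addresses this runner happened to pick (run as root):

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator IP, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                         # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target namespace -> initiator

nvmf_tgt itself is then launched under ip netns exec cvl_0_0_ns_spdk (the NVMF_TARGET_NS_CMD prefix visible in the nvmfpid line above), while RPCs still reach it through the filesystem-visible UNIX socket /var/tmp/spdk.sock.
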
00:32:18.794 15:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:18.794 15:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:32:18.794 15:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:18.794 15:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:18.794 15:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:19.056 15:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:19.056 15:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:19.056 15:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:19.056 15:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:19.056 [2024-11-20 15:42:07.789603] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:19.056 15:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:19.056 15:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystems 00:32:19.056 15:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:19.056 15:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:19.056 15:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:19.056 15:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:32:19.056 15:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:32:19.056 15:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:19.056 15:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:19.056 Malloc0 00:32:19.056 [2024-11-20 15:42:07.893877] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:19.056 15:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:19.056 15:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:32:19.056 15:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:19.056 15:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:19.056 15:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=822195 00:32:19.056 15:42:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 822195 /var/tmp/bdevperf.sock 00:32:19.056 15:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 822195 ']' 00:32:19.056 15:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:19.056 15:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:19.056 15:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:19.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:19.056 15:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:32:19.056 15:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:19.056 15:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:32:19.056 15:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:19.056 15:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:32:19.056 15:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:32:19.056 15:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:19.056 15:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:19.056 { 00:32:19.056 "params": { 00:32:19.056 "name": "Nvme$subsystem", 00:32:19.056 "trtype": "$TEST_TRANSPORT", 00:32:19.056 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:19.056 "adrfam": "ipv4", 00:32:19.056 "trsvcid": "$NVMF_PORT", 00:32:19.056 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:19.056 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:19.056 "hdgst": ${hdgst:-false}, 00:32:19.056 "ddgst": ${ddgst:-false} 00:32:19.056 }, 00:32:19.056 "method": "bdev_nvme_attach_controller" 00:32:19.056 } 00:32:19.056 EOF 00:32:19.056 )") 00:32:19.056 15:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:32:19.056 15:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
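
Here gen_nvmf_target_json assembles bdevperf's configuration in memory: the heredoc above is the unexpanded per-subsystem template, the shell substitutes Nvme$subsystem, $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT, and jq validates and pretty-prints the result (shown expanded just below). bdevperf never reads a config file from disk; the harness hands the JSON over through a bash process substitution, which is why the command line above says --json /dev/fd/63. A rough equivalent outside the harness, assuming an SPDK build tree with the nvmf common.sh helpers sourced:

    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10
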
00:32:19.056 15:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:32:19.056 15:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:19.056 "params": { 00:32:19.056 "name": "Nvme0", 00:32:19.056 "trtype": "tcp", 00:32:19.056 "traddr": "10.0.0.2", 00:32:19.056 "adrfam": "ipv4", 00:32:19.056 "trsvcid": "4420", 00:32:19.056 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:19.056 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:19.056 "hdgst": false, 00:32:19.056 "ddgst": false 00:32:19.056 }, 00:32:19.056 "method": "bdev_nvme_attach_controller" 00:32:19.056 }' 00:32:19.056 [2024-11-20 15:42:08.006080] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:32:19.056 [2024-11-20 15:42:08.006169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid822195 ] 00:32:19.318 [2024-11-20 15:42:08.100599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:19.318 [2024-11-20 15:42:08.153723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:19.579 Running I/O for 10 seconds... 00:32:20.154 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:20.154 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:32:20.154 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:32:20.154 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.154 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:20.154 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.154 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:20.154 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:32:20.154 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:32:20.154 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:32:20.154 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:32:20.154 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:32:20.154 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:32:20.154 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:32:20.154 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:32:20.154 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.154 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:20.154 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:32:20.154 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.154 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:32:20.154 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:32:20.154 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:32:20.154 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:32:20.154 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:32:20.154 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:32:20.154 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.154 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:20.154 [2024-11-20 15:42:08.897280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1833f20 is same with the state(6) to be set 00:32:20.154 [2024-11-20 15:42:08.897350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1833f20 is same with the state(6) to be set 00:32:20.154 [2024-11-20 15:42:08.897359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1833f20 is same with the state(6) to be set 00:32:20.154 [2024-11-20 15:42:08.897574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.154 [2024-11-20 15:42:08.897637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.154 [2024-11-20 15:42:08.897659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.154 [2024-11-20 15:42:08.897668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.154 [2024-11-20 15:42:08.897680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.154 [2024-11-20 15:42:08.897692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.154 [2024-11-20 15:42:08.897709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.154 [2024-11-20 15:42:08.897717] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.154 [2024-11-20 15:42:08.897727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.154 [2024-11-20 15:42:08.897735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.154 [2024-11-20 15:42:08.897745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.154 [2024-11-20 15:42:08.897752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.154 [2024-11-20 15:42:08.897762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.154 [2024-11-20 15:42:08.897769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.897779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.155 [2024-11-20 15:42:08.897787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.897796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.155 [2024-11-20 15:42:08.897804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.897814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.155 [2024-11-20 15:42:08.897821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.897832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.155 [2024-11-20 15:42:08.897839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.897849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.155 [2024-11-20 15:42:08.897864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.897874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.155 [2024-11-20 15:42:08.897881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.897891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.155 [2024-11-20 15:42:08.897899] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.897909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.155 [2024-11-20 15:42:08.897916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.897925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.155 [2024-11-20 15:42:08.897933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.897943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.155 [2024-11-20 15:42:08.897950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.897960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.155 [2024-11-20 15:42:08.897968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.897979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.155 [2024-11-20 15:42:08.897986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.897996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.155 [2024-11-20 15:42:08.898004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.898014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.155 [2024-11-20 15:42:08.898022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.898033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.155 [2024-11-20 15:42:08.898041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.898050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.155 [2024-11-20 15:42:08.898058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.898068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.155 [2024-11-20 15:42:08.898076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.898088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.155 [2024-11-20 15:42:08.898095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.898105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.155 [2024-11-20 15:42:08.898113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.898123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.155 [2024-11-20 15:42:08.898131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.898142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.155 [2024-11-20 15:42:08.898150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.898168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.155 [2024-11-20 15:42:08.898177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.898186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.155 [2024-11-20 15:42:08.898194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.898203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.155 [2024-11-20 15:42:08.898211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.898221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.155 [2024-11-20 15:42:08.898229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.898239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.155 [2024-11-20 15:42:08.898250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.898260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.155 [2024-11-20 15:42:08.898268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.898278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.155 [2024-11-20 15:42:08.898285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.898295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.155 [2024-11-20 15:42:08.898302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.898312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.155 [2024-11-20 15:42:08.898322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.898333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.155 [2024-11-20 15:42:08.898341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.898351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.155 [2024-11-20 15:42:08.898359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.898369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.155 [2024-11-20 15:42:08.898376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.898386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.155 [2024-11-20 15:42:08.898393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.898403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.155 [2024-11-20 15:42:08.898410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.898420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.155 [2024-11-20 15:42:08.898427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.898437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.155 [2024-11-20 15:42:08.898445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.898455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.155 [2024-11-20 15:42:08.898462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.898471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.155 [2024-11-20 15:42:08.898480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.155 [2024-11-20 15:42:08.898489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.156 [2024-11-20 15:42:08.898497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.156 [2024-11-20 15:42:08.898506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.156 [2024-11-20 15:42:08.898515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.156 [2024-11-20 15:42:08.898526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.156 [2024-11-20 15:42:08.898533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.156 [2024-11-20 15:42:08.898546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.156 [2024-11-20 15:42:08.898553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.156 [2024-11-20 15:42:08.898562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.156 [2024-11-20 15:42:08.898570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.156 [2024-11-20 15:42:08.898581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.156 [2024-11-20 15:42:08.898588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.156 [2024-11-20 15:42:08.898598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.156 [2024-11-20 15:42:08.898605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.156 [2024-11-20 15:42:08.898615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.156 [2024-11-20 15:42:08.898623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:20.156 [2024-11-20 15:42:08.898632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.156 [2024-11-20 15:42:08.898641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.156 [2024-11-20 15:42:08.898651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.156 [2024-11-20 15:42:08.898658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.156 [2024-11-20 15:42:08.898668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.156 [2024-11-20 15:42:08.898675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.156 [2024-11-20 15:42:08.898685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.156 [2024-11-20 15:42:08.898692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.156 [2024-11-20 15:42:08.898701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.156 [2024-11-20 15:42:08.898709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.156 [2024-11-20 15:42:08.898718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.156 [2024-11-20 15:42:08.898727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.156 [2024-11-20 15:42:08.898736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.156 [2024-11-20 15:42:08.898744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.156 [2024-11-20 15:42:08.898753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.156 [2024-11-20 15:42:08.898763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.156 [2024-11-20 15:42:08.898772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.156 [2024-11-20 15:42:08.898780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.156 [2024-11-20 15:42:08.898789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.156 [2024-11-20 15:42:08.898796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.156 [2024-11-20 
15:42:08.900135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:20.156 task offset: 98048 on job bdev=Nvme0n1 fails 00:32:20.156 00:32:20.156 Latency(us) 00:32:20.156 [2024-11-20T14:42:09.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:20.156 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:20.156 Job: Nvme0n1 ended in about 0.54 seconds with error 00:32:20.156 Verification LBA range: start 0x0 length 0x400 00:32:20.156 Nvme0n1 : 0.54 1306.18 81.64 118.74 0.00 43814.42 1693.01 37137.07 00:32:20.156 [2024-11-20T14:42:09.116Z] =================================================================================================================== 00:32:20.156 [2024-11-20T14:42:09.116Z] Total : 1306.18 81.64 118.74 0.00 43814.42 1693.01 37137.07 00:32:20.156 [2024-11-20 15:42:08.902404] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:20.156 [2024-11-20 15:42:08.902446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x214e000 (9): Bad file descriptor 00:32:20.156 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.156 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:32:20.156 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.156 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:20.156 [2024-11-20 15:42:08.903728] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:32:20.156 [2024-11-20 15:42:08.903819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:32:20.156 [2024-11-20 15:42:08.903850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.156 [2024-11-20 15:42:08.903868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:32:20.156 [2024-11-20 15:42:08.903877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:32:20.156 [2024-11-20 15:42:08.903886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:20.156 [2024-11-20 15:42:08.903894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x214e000 00:32:20.156 [2024-11-20 15:42:08.903918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x214e000 (9): Bad file descriptor 00:32:20.156 [2024-11-20 15:42:08.903933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:20.156 [2024-11-20 15:42:08.903942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:20.156 [2024-11-20 15:42:08.903952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
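
The long run of ABORTED - SQ DELETION completions above is the intended failure injection, not a malfunction: the nvmf_subsystem_remove_host call at host_management.sh line 84 de-authorizes nqn.2016-06.io.spdk:host0, the target deletes its submission queues, every in-flight verify I/O is aborted, and the host's reconnect attempt is refused with 'does not allow host' (the FABRIC CONNECT failing with sct 1, sc 132) before line 85 restores access. Since rpc_cmd in the harness wraps scripts/rpc.py, the equivalent manual sequence would be roughly:

    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # host-side I/O now fails and reconnects are rejected by access control
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
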
00:32:20.156 [2024-11-20 15:42:08.903970] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:20.156 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.156 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:32:21.109 15:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 822195 00:32:21.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (822195) - No such process 00:32:21.109 15:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:32:21.109 15:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:32:21.109 15:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:32:21.109 15:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:32:21.109 15:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:32:21.109 15:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:32:21.109 15:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:21.109 15:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:21.109 { 00:32:21.109 "params": { 00:32:21.109 "name": "Nvme$subsystem", 00:32:21.109 "trtype": "$TEST_TRANSPORT", 00:32:21.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:21.109 "adrfam": "ipv4", 00:32:21.109 "trsvcid": "$NVMF_PORT", 00:32:21.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:21.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:21.109 "hdgst": ${hdgst:-false}, 00:32:21.109 "ddgst": ${ddgst:-false} 00:32:21.109 }, 00:32:21.109 "method": "bdev_nvme_attach_controller" 00:32:21.109 } 00:32:21.109 EOF 00:32:21.109 )") 00:32:21.109 15:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:32:21.109 15:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
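
With access restored, host_management.sh line 100 launches a fresh one-second bdevperf pass against the same subsystem to confirm the target still serves I/O after the disruption; its expanded JSON and results follow. The workload flags are bdevperf's standard ones, decoded here for reference (a sketch under the same assumptions as the earlier invocation):

    # -q 64      queue depth
    # -o 65536   I/O size in bytes (64 KiB)
    # -w verify  write, read back, and compare each block
    # -t 1       run time in seconds
    ./build/examples/bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1
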
00:32:21.109 15:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:32:21.109 15:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:21.109 "params": { 00:32:21.109 "name": "Nvme0", 00:32:21.109 "trtype": "tcp", 00:32:21.109 "traddr": "10.0.0.2", 00:32:21.109 "adrfam": "ipv4", 00:32:21.109 "trsvcid": "4420", 00:32:21.109 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:21.109 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:21.109 "hdgst": false, 00:32:21.109 "ddgst": false 00:32:21.109 }, 00:32:21.109 "method": "bdev_nvme_attach_controller" 00:32:21.109 }' 00:32:21.109 [2024-11-20 15:42:09.979045] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:32:21.109 [2024-11-20 15:42:09.979118] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid822846 ] 00:32:21.370 [2024-11-20 15:42:10.073842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:21.370 [2024-11-20 15:42:10.128054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:21.370 Running I/O for 1 seconds... 00:32:22.620 1477.00 IOPS, 92.31 MiB/s 00:32:22.620 Latency(us) 00:32:22.620 [2024-11-20T14:42:11.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:22.620 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:22.620 Verification LBA range: start 0x0 length 0x400 00:32:22.620 Nvme0n1 : 1.05 1472.26 92.02 0.00 0.00 41180.02 2034.35 41943.04 00:32:22.620 [2024-11-20T14:42:11.580Z] =================================================================================================================== 00:32:22.620 [2024-11-20T14:42:11.580Z] Total : 1472.26 92.02 0.00 0.00 41180.02 2034.35 41943.04 00:32:22.620 15:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:32:22.620 15:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:32:22.620 15:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:32:22.620 15:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:22.620 15:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:32:22.620 15:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:22.620 15:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:32:22.620 15:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:22.620 15:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:32:22.620 15:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:22.620 15:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 
-- # modprobe -v -r nvme-tcp 00:32:22.620 rmmod nvme_tcp 00:32:22.620 rmmod nvme_fabrics 00:32:22.620 rmmod nvme_keyring 00:32:22.620 15:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:22.620 15:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:32:22.620 15:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:32:22.620 15:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 821903 ']' 00:32:22.620 15:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 821903 00:32:22.620 15:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 821903 ']' 00:32:22.620 15:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 821903 00:32:22.620 15:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:32:22.620 15:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:22.620 15:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 821903 00:32:22.880 15:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:22.880 15:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:22.880 15:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 821903' 00:32:22.880 killing process with pid 821903 00:32:22.880 15:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 821903 00:32:22.880 15:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 821903 00:32:22.880 [2024-11-20 15:42:11.716897] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:32:22.880 15:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:22.880 15:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:22.880 15:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:22.880 15:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:32:22.880 15:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:32:22.880 15:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:22.880 15:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:32:22.880 15:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:22.880 15:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:22.881 15:42:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:22.881 15:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:22.881 15:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:25.424 15:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:25.424 15:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:32:25.424 00:32:25.424 real 0m14.728s 00:32:25.424 user 0m19.475s 00:32:25.424 sys 0m7.402s 00:32:25.424 15:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:25.424 15:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:25.424 ************************************ 00:32:25.424 END TEST nvmf_host_management 00:32:25.424 ************************************ 00:32:25.424 15:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:25.424 15:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:25.424 15:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:25.424 15:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:25.424 ************************************ 00:32:25.424 START TEST nvmf_lvol 00:32:25.424 ************************************ 00:32:25.424 15:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:25.424 * Looking for test storage... 
00:32:25.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:25.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.424 --rc genhtml_branch_coverage=1 00:32:25.424 --rc genhtml_function_coverage=1 00:32:25.424 --rc genhtml_legend=1 00:32:25.424 --rc geninfo_all_blocks=1 00:32:25.424 --rc geninfo_unexecuted_blocks=1 00:32:25.424 00:32:25.424 ' 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:25.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.424 --rc genhtml_branch_coverage=1 00:32:25.424 --rc genhtml_function_coverage=1 00:32:25.424 --rc genhtml_legend=1 00:32:25.424 --rc geninfo_all_blocks=1 00:32:25.424 --rc geninfo_unexecuted_blocks=1 00:32:25.424 00:32:25.424 ' 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:25.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.424 --rc genhtml_branch_coverage=1 00:32:25.424 --rc genhtml_function_coverage=1 00:32:25.424 --rc genhtml_legend=1 00:32:25.424 --rc geninfo_all_blocks=1 00:32:25.424 --rc geninfo_unexecuted_blocks=1 00:32:25.424 00:32:25.424 ' 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:25.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.424 --rc genhtml_branch_coverage=1 00:32:25.424 --rc genhtml_function_coverage=1 00:32:25.424 --rc genhtml_legend=1 00:32:25.424 --rc geninfo_all_blocks=1 00:32:25.424 --rc geninfo_unexecuted_blocks=1 00:32:25.424 00:32:25.424 ' 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.424 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.425 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.425 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:32:25.425 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.425 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:32:25.425 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:25.425 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:25.425 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:25.425 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:25.425 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:25.425 15:42:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:25.425 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:25.425 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:25.425 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:25.425 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:25.425 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:25.425 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:25.425 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:32:25.425 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:32:25.425 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:25.425 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:32:25.425 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:25.425 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:25.425 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:25.425 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:25.425 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:25.425 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:25.425 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:25.425 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:25.425 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:25.425 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:25.425 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:32:25.425 15:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:33.563 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:33.563 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:32:33.563 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:33.563 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:33.563 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:33.563 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:32:33.563 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:33.563 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:32:33.563 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:33.563 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:32:33.563 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:32:33.563 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:32:33.563 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:32:33.563 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:32:33.563 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:32:33.563 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:33.563 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:33.563 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:33.563 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:33.563 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:33.563 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:33.563 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:33.563 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:33.563 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:33.563 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:33.563 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:33.563 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:33.563 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:33.563 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:33.563 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:33.563 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:33.563 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:33.563 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:33.563 15:42:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:33.563 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:33.563 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:33.563 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:33.563 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:33.564 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:33.564 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:33.564 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:33.564 
15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:32:33.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:33.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.532 ms
00:32:33.564
00:32:33.564 --- 10.0.0.2 ping statistics ---
00:32:33.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:33.564 rtt min/avg/max/mdev = 0.532/0.532/0.532/0.000 ms
00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:32:33.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:33.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms
00:32:33.564
00:32:33.564 --- 10.0.0.1 ping statistics ---
00:32:33.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:33.564 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms
00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0
00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=827424
00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 827424
00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7
00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 827424 ']'
00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:33.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:33.564 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:32:33.565 [2024-11-20 15:42:21.719400] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
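Stripped of the xtrace prefixes, the wiring that the two pings above just validated reduces to the following sequence (commands and interface names exactly as traced; the idea is to hide one port of the dual-port E810 in a private namespace for the target, leaving its sibling in the root namespace for the initiator, so one host can play both roles):

    ip netns add cvl_0_0_ns_spdk                                  # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move the target port in
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # the suite's ipts wrapper also tags this rule with an SPDK_NVMF comment for later cleanup
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                            # target reachable from the initiator side

_remove_spdk_ns tears this back down between tests, which is the ip -4 addr flush cvl_0_1 seen at each test boundary in this log.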
00:32:33.564 [2024-11-20 15:42:21.720522] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:32:33.564 [2024-11-20 15:42:21.720575] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:33.564 [2024-11-20 15:42:21.820445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:33.564 [2024-11-20 15:42:21.873548] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:33.564 [2024-11-20 15:42:21.873597] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:33.564 [2024-11-20 15:42:21.873606] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:33.564 [2024-11-20 15:42:21.873613] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:33.564 [2024-11-20 15:42:21.873620] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:33.565 [2024-11-20 15:42:21.875509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:33.565 [2024-11-20 15:42:21.875670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:33.565 [2024-11-20 15:42:21.875672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:33.565 [2024-11-20 15:42:21.954080] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:33.565 [2024-11-20 15:42:21.955153] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:33.565 [2024-11-20 15:42:21.955570] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:33.565 [2024-11-20 15:42:21.955696] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
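With all poll-group threads in interrupt mode, the suite assembles its volume stack over the RPC socket in the trace lines that follow; condensed into a plain script, the sequence is (flags exactly as traced; the lvstore and lvol UUIDs appearing below are simply the values these create calls returned on this run):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                       # -> Malloc0 (64 MiB, 512 B blocks)
    $rpc bdev_malloc_create 64 512                       # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)       # lvstore on top of the RAID-0
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)      # 20 MiB logical volume
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The teardown near the end of the test walks the same stack back down (nvmf_delete_subsystem, bdev_lvol_delete, bdev_lvol_delete_lvstore), as the later trace lines show.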
00:32:33.825 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:33.825 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:32:33.825 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:33.825 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:33.825 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:33.825 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:33.825 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:33.825 [2024-11-20 15:42:22.732603] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:33.825 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:34.085 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:32:34.085 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:34.345 15:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:32:34.345 15:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:32:34.606 15:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:32:34.866 15:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=17332a1c-77ac-400d-bb32-770d17ef93b1 00:32:34.866 15:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 17332a1c-77ac-400d-bb32-770d17ef93b1 lvol 20 00:32:34.866 15:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=62d7e9e9-07c8-4ab9-b010-393c7d63f704 00:32:34.866 15:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:35.127 15:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 62d7e9e9-07c8-4ab9-b010-393c7d63f704 00:32:35.388 15:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:35.388 [2024-11-20 15:42:24.292513] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 ***
00:32:35.388 15:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:32:35.649 15:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:32:35.650 15:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=828083
00:32:35.650 15:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1
00:32:36.590 15:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 62d7e9e9-07c8-4ab9-b010-393c7d63f704 MY_SNAPSHOT
00:32:36.850 15:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1cc03c3d-b09c-4431-a6fc-0da449e204a7
00:32:36.850 15:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 62d7e9e9-07c8-4ab9-b010-393c7d63f704 30
00:32:37.112 15:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 1cc03c3d-b09c-4431-a6fc-0da449e204a7 MY_CLONE
00:32:37.373 15:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=89cabb27-7dd4-474c-a9b9-eef798f3db71
00:32:37.373 15:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 89cabb27-7dd4-474c-a9b9-eef798f3db71
00:32:37.944 15:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 828083
00:32:46.079 Initializing NVMe Controllers
00:32:46.079 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:32:46.079 Controller IO queue size 128, less than required.
00:32:46.079 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:46.079 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:32:46.079 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:32:46.079 Initialization complete. Launching workers.
00:32:46.079 ========================================================
00:32:46.079                                                                              Latency(us)
00:32:46.079 Device Information                                                  :       IOPS      MiB/s    Average        min        max
00:32:46.079 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3:   15250.90      59.57    8394.81    1797.57   63566.84
00:32:46.079 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4:   15374.20      60.06    8326.01    4217.30   57749.65
00:32:46.079 ========================================================
00:32:46.079 Total                                                               :   30625.10     119.63    8360.27    1797.57   63566.84
00:32:46.079
00:32:46.079 15:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:32:46.340 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 62d7e9e9-07c8-4ab9-b010-393c7d63f704
00:32:46.340 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 17332a1c-77ac-400d-bb32-770d17ef93b1
00:32:46.600 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:32:46.601 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:32:46.601 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:32:46.601 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:46.601 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:32:46.601 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:46.601 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:32:46.601 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:46.601 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:46.601 rmmod nvme_tcp
00:32:46.601 rmmod nvme_fabrics
00:32:46.601 rmmod nvme_keyring
00:32:46.601 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:46.601 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:32:46.601 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:32:46.601 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 827424 ']'
00:32:46.601 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 827424
00:32:46.601 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 827424 ']'
00:32:46.601 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 827424
00:32:46.601 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname
00:32:46.601 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:46.601 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol --
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 827424 00:32:46.601 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:46.601 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:46.601 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 827424' 00:32:46.601 killing process with pid 827424 00:32:46.601 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 827424 00:32:46.601 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 827424 00:32:46.862 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:46.862 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:46.862 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:46.862 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:32:46.862 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:32:46.862 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:46.862 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:32:46.862 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:46.862 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:46.862 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:46.862 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:46.862 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:48.922 15:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:48.922 00:32:48.922 real 0m23.830s 00:32:48.922 user 0m55.838s 00:32:48.922 sys 0m10.786s 00:32:48.922 15:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:48.922 15:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:48.922 ************************************ 00:32:48.922 END TEST nvmf_lvol 00:32:48.922 ************************************ 00:32:48.922 15:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:48.922 15:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:48.922 15:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:48.922 15:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:48.922 ************************************ 00:32:48.922 START TEST nvmf_lvs_grow 00:32:48.922 
************************************ 00:32:48.922 15:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:49.183 * Looking for test storage... 00:32:49.183 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:49.183 15:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:49.183 15:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:32:49.183 15:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:49.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.183 --rc genhtml_branch_coverage=1 00:32:49.183 --rc genhtml_function_coverage=1 00:32:49.183 --rc genhtml_legend=1 00:32:49.183 --rc geninfo_all_blocks=1 00:32:49.183 --rc geninfo_unexecuted_blocks=1 00:32:49.183 00:32:49.183 ' 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:49.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.183 --rc genhtml_branch_coverage=1 00:32:49.183 --rc genhtml_function_coverage=1 00:32:49.183 --rc genhtml_legend=1 00:32:49.183 --rc geninfo_all_blocks=1 00:32:49.183 --rc geninfo_unexecuted_blocks=1 00:32:49.183 00:32:49.183 ' 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:49.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.183 --rc genhtml_branch_coverage=1 00:32:49.183 --rc genhtml_function_coverage=1 00:32:49.183 --rc genhtml_legend=1 00:32:49.183 --rc geninfo_all_blocks=1 00:32:49.183 --rc geninfo_unexecuted_blocks=1 00:32:49.183 00:32:49.183 ' 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:49.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.183 --rc genhtml_branch_coverage=1 00:32:49.183 --rc genhtml_function_coverage=1 00:32:49.183 --rc genhtml_legend=1 00:32:49.183 --rc geninfo_all_blocks=1 00:32:49.183 --rc geninfo_unexecuted_blocks=1 00:32:49.183 00:32:49.183 ' 00:32:49.183 15:42:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:49.183 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
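The build_nvmf_app_args calls being traced here fold this run's configuration into the target's argument vector, exactly as they did for the nvmf_lvol pass above; a schematic reconstruction of how the array ends up (paths and values as they appear in this run, NO_HUGE empty because hugepages stay enabled):

    NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
    NVMF_APP_SHM_ID=0
    NO_HUGE=()
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shared-memory id and tracepoint group mask
    NVMF_APP+=("${NO_HUGE[@]}")                   # no-op here
    NVMF_APP+=(--interrupt-mode)                  # appended because the suite passed --interrupt-mode
    # nvmftestinit later prefixes the namespace wrapper:
    # NVMF_APP=(ip netns exec cvl_0_0_ns_spdk "${NVMF_APP[@]}")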
00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:32:49.184 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:57.318 15:42:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
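The arrays built above map NIC families to PCI device IDs and then keep only the E810 entries, since this is an e810 run. Outside the harness the same matching can be approximated with lspci; a sketch using the two E810 device IDs the trace checks (0x1592 and 0x159b):

  # List Intel (vendor 0x8086) E810 NICs by the device IDs from nvmf/common.sh@325-326
  lspci -D -d 8086:1592
  lspci -D -d 8086:159b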
00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:57.318 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:57.318 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:57.318 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:57.319 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:57.319 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:57.319 15:42:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:32:57.319 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:57.319 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms
00:32:57.319
00:32:57.319 --- 10.0.0.2 ping statistics ---
00:32:57.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:57.319 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms
00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:32:57.319 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:57.319 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms
00:32:57.319
00:32:57.319 --- 10.0.0.1 ping statistics ---
00:32:57.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:57.319 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms
00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0
00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1
00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=834148
00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 834148
00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1
00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 834148 ']'
00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:57.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:57.319 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:32:57.319 [2024-11-20 15:42:45.639580] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
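Condensed from the entries just above: the target is launched inside the cvl_0_0_ns_spdk namespace on one core in interrupt mode, and waitforlisten blocks until the RPC socket answers. A rough manual equivalent, run from the spdk checkout (the polling loop is illustrative, not the harness code):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
  nvmfpid=$!
  # Poll the default RPC socket until the app responds; rpc_get_methods is a cheap query
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done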
00:32:57.319 [2024-11-20 15:42:45.640713] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:32:57.319 [2024-11-20 15:42:45.640765] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:57.319 [2024-11-20 15:42:45.741168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:57.319 [2024-11-20 15:42:45.792824] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:57.319 [2024-11-20 15:42:45.792875] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:57.319 [2024-11-20 15:42:45.792884] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:57.319 [2024-11-20 15:42:45.792892] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:57.319 [2024-11-20 15:42:45.792898] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:57.319 [2024-11-20 15:42:45.793687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:57.319 [2024-11-20 15:42:45.870079] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:57.319 [2024-11-20 15:42:45.870380] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:57.579 15:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:57.579 15:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:32:57.579 15:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:57.579 15:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:57.579 15:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:57.579 15:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:57.579 15:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:57.840 [2024-11-20 15:42:46.682589] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:57.840 15:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:32:57.840 15:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:57.840 15:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:57.840 15:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:57.840 ************************************ 00:32:57.840 START TEST lvs_grow_clean 00:32:57.840 ************************************ 00:32:57.840 15:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:32:57.840 15:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:57.840 15:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:57.840 15:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:57.840 15:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:57.840 15:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:57.840 15:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:57.840 15:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:57.840 15:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:57.840 15:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:58.101 15:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:58.101 15:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:58.361 15:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=0ae1f1d8-fcee-44d7-bf64-3e8dd7f79eb7 00:32:58.361 15:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0ae1f1d8-fcee-44d7-bf64-3e8dd7f79eb7 00:32:58.361 15:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:58.621 15:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:58.621 15:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:58.621 15:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0ae1f1d8-fcee-44d7-bf64-3e8dd7f79eb7 lvol 150 00:32:58.621 15:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=16661303-3079-4451-9d31-20b914d519a4 00:32:58.622 15:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:58.622 15:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:58.883 [2024-11-20 15:42:47.714315] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:58.883 [2024-11-20 15:42:47.714485] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:58.883 true 00:32:58.883 15:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0ae1f1d8-fcee-44d7-bf64-3e8dd7f79eb7 00:32:58.883 15:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:59.144 15:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:59.144 15:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:59.404 15:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 16661303-3079-4451-9d31-20b914d519a4 00:32:59.404 15:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:59.664 [2024-11-20 15:42:48.443001] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:59.664 15:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:59.924 15:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:59.924 15:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=834851 00:32:59.924 15:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:59.924 15:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 834851 /var/tmp/bdevperf.sock 00:32:59.924 15:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 834851 ']' 00:32:59.924 15:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:32:59.924 15:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:59.924 15:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:59.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:59.924 15:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:59.924 15:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:59.924 [2024-11-20 15:42:48.682613] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:32:59.924 [2024-11-20 15:42:48.682685] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid834851 ] 00:32:59.924 [2024-11-20 15:42:48.776007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:59.924 [2024-11-20 15:42:48.828271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:00.864 15:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:00.864 15:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:33:00.864 15:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:33:00.864 Nvme0n1 00:33:00.864 15:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:33:01.125 [ 00:33:01.125 { 00:33:01.125 "name": "Nvme0n1", 00:33:01.125 "aliases": [ 00:33:01.125 "16661303-3079-4451-9d31-20b914d519a4" 00:33:01.125 ], 00:33:01.125 "product_name": "NVMe disk", 00:33:01.125 "block_size": 4096, 00:33:01.125 "num_blocks": 38912, 00:33:01.125 "uuid": "16661303-3079-4451-9d31-20b914d519a4", 00:33:01.125 "numa_id": 0, 00:33:01.125 "assigned_rate_limits": { 00:33:01.125 "rw_ios_per_sec": 0, 00:33:01.125 "rw_mbytes_per_sec": 0, 00:33:01.125 "r_mbytes_per_sec": 0, 00:33:01.125 "w_mbytes_per_sec": 0 00:33:01.125 }, 00:33:01.125 "claimed": false, 00:33:01.125 "zoned": false, 00:33:01.125 "supported_io_types": { 00:33:01.125 "read": true, 00:33:01.125 "write": true, 00:33:01.125 "unmap": true, 00:33:01.125 "flush": true, 00:33:01.125 "reset": true, 00:33:01.125 "nvme_admin": true, 00:33:01.125 "nvme_io": true, 00:33:01.125 "nvme_io_md": false, 00:33:01.125 "write_zeroes": true, 00:33:01.125 "zcopy": false, 00:33:01.125 "get_zone_info": false, 00:33:01.125 "zone_management": false, 00:33:01.125 "zone_append": false, 00:33:01.125 "compare": true, 00:33:01.125 "compare_and_write": true, 00:33:01.125 "abort": true, 00:33:01.125 "seek_hole": false, 00:33:01.125 "seek_data": false, 00:33:01.125 "copy": true, 
00:33:01.125 "nvme_iov_md": false 00:33:01.125 }, 00:33:01.125 "memory_domains": [ 00:33:01.125 { 00:33:01.125 "dma_device_id": "system", 00:33:01.125 "dma_device_type": 1 00:33:01.125 } 00:33:01.125 ], 00:33:01.125 "driver_specific": { 00:33:01.125 "nvme": [ 00:33:01.125 { 00:33:01.125 "trid": { 00:33:01.125 "trtype": "TCP", 00:33:01.125 "adrfam": "IPv4", 00:33:01.125 "traddr": "10.0.0.2", 00:33:01.125 "trsvcid": "4420", 00:33:01.125 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:33:01.125 }, 00:33:01.125 "ctrlr_data": { 00:33:01.125 "cntlid": 1, 00:33:01.125 "vendor_id": "0x8086", 00:33:01.125 "model_number": "SPDK bdev Controller", 00:33:01.125 "serial_number": "SPDK0", 00:33:01.125 "firmware_revision": "25.01", 00:33:01.125 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:01.125 "oacs": { 00:33:01.125 "security": 0, 00:33:01.125 "format": 0, 00:33:01.125 "firmware": 0, 00:33:01.125 "ns_manage": 0 00:33:01.125 }, 00:33:01.125 "multi_ctrlr": true, 00:33:01.125 "ana_reporting": false 00:33:01.125 }, 00:33:01.125 "vs": { 00:33:01.125 "nvme_version": "1.3" 00:33:01.125 }, 00:33:01.125 "ns_data": { 00:33:01.125 "id": 1, 00:33:01.125 "can_share": true 00:33:01.125 } 00:33:01.125 } 00:33:01.125 ], 00:33:01.125 "mp_policy": "active_passive" 00:33:01.125 } 00:33:01.125 } 00:33:01.125 ] 00:33:01.125 15:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=835166 00:33:01.125 15:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:33:01.125 15:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:01.125 Running I/O for 10 seconds... 
00:33:02.507 Latency(us)
00:33:02.507 [2024-11-20T14:42:51.467Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:02.507 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:02.507 Nvme0n1 : 1.00 16347.00 63.86 0.00 0.00 0.00 0.00 0.00
00:33:02.507 [2024-11-20T14:42:51.467Z] ===================================================================================================================
00:33:02.507 [2024-11-20T14:42:51.467Z] Total : 16347.00 63.86 0.00 0.00 0.00 0.00 0.00
00:33:02.507
00:33:03.077 15:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0ae1f1d8-fcee-44d7-bf64-3e8dd7f79eb7
00:33:03.338 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:03.338 Nvme0n1 : 2.00 16549.50 64.65 0.00 0.00 0.00 0.00 0.00
[2024-11-20T14:42:52.298Z] ===================================================================================================================
00:33:03.338 [2024-11-20T14:42:52.298Z] Total : 16549.50 64.65 0.00 0.00 0.00 0.00 0.00
00:33:03.338
00:33:03.338 true
00:33:03.338 15:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0ae1f1d8-fcee-44d7-bf64-3e8dd7f79eb7
00:33:03.338 15:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:33:03.597 15:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:33:03.597 15:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:33:03.597 15:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 835166
00:33:04.167 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:04.167 Nvme0n1 : 3.00 16675.67 65.14 0.00 0.00 0.00 0.00 0.00
[2024-11-20T14:42:53.127Z] ===================================================================================================================
00:33:04.167 [2024-11-20T14:42:53.127Z] Total : 16675.67 65.14 0.00 0.00 0.00 0.00 0.00
00:33:04.167
00:33:05.548 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:05.548 Nvme0n1 : 4.00 16898.75 66.01 0.00 0.00 0.00 0.00 0.00
[2024-11-20T14:42:54.508Z] ===================================================================================================================
00:33:05.548 [2024-11-20T14:42:54.508Z] Total : 16898.75 66.01 0.00 0.00 0.00 0.00 0.00
00:33:05.548
00:33:06.489 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:06.489 Nvme0n1 : 5.00 17848.60 69.72 0.00 0.00 0.00 0.00 0.00
[2024-11-20T14:42:55.449Z] ===================================================================================================================
00:33:06.489 [2024-11-20T14:42:55.449Z] Total : 17848.60 69.72 0.00 0.00 0.00 0.00 0.00
00:33:06.489
00:33:07.430 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:07.430 Nvme0n1 : 6.00 18999.17 74.22 0.00 0.00 0.00 0.00 0.00
00:33:07.430 [2024-11-20T14:42:56.390Z] ===================================================================================================================
00:33:07.430 [2024-11-20T14:42:56.390Z] Total : 18999.17 74.22 0.00 0.00 0.00 0.00 0.00
00:33:07.430
00:33:08.370 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:08.370 Nvme0n1 : 7.00 19827.86 77.45 0.00 0.00 0.00 0.00 0.00
[2024-11-20T14:42:57.330Z] ===================================================================================================================
00:33:08.370 [2024-11-20T14:42:57.330Z] Total : 19827.86 77.45 0.00 0.00 0.00 0.00 0.00
00:33:08.370
00:33:09.323 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:09.323 Nvme0n1 : 8.00 20453.38 79.90 0.00 0.00 0.00 0.00 0.00
[2024-11-20T14:42:58.283Z] ===================================================================================================================
00:33:09.323 [2024-11-20T14:42:58.283Z] Total : 20453.38 79.90 0.00 0.00 0.00 0.00 0.00
00:33:09.323
00:33:10.263 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:10.263 Nvme0n1 : 9.00 20938.11 81.79 0.00 0.00 0.00 0.00 0.00
[2024-11-20T14:42:59.223Z] ===================================================================================================================
00:33:10.263 [2024-11-20T14:42:59.223Z] Total : 20938.11 81.79 0.00 0.00 0.00 0.00 0.00
00:33:10.263
00:33:11.203 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:11.203 Nvme0n1 : 10.00 21325.90 83.30 0.00 0.00 0.00 0.00 0.00
[2024-11-20T14:43:00.163Z] ===================================================================================================================
00:33:11.203 [2024-11-20T14:43:00.163Z] Total : 21325.90 83.30 0.00 0.00 0.00 0.00 0.00
00:33:11.203
00:33:11.203
00:33:11.203 Latency(us)
00:33:11.203 [2024-11-20T14:43:00.163Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:11.203 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:11.203 Nvme0n1 : 10.01 21327.67 83.31 0.00 0.00 5997.64 3932.16 23592.96
00:33:11.203 [2024-11-20T14:43:00.163Z] ===================================================================================================================
00:33:11.203 [2024-11-20T14:43:00.163Z] Total : 21327.67 83.31 0.00 0.00 5997.64 3932.16 23592.96
00:33:11.203 {
00:33:11.203 "results": [
00:33:11.203 {
00:33:11.203 "job": "Nvme0n1",
00:33:11.203 "core_mask": "0x2",
00:33:11.203 "workload": "randwrite",
00:33:11.203 "status": "finished",
00:33:11.203 "queue_depth": 128,
00:33:11.203 "io_size": 4096,
00:33:11.203 "runtime": 10.00517,
00:33:11.203 "iops": 21327.673592752548,
00:33:11.203 "mibps": 83.31122497168964,
00:33:11.203 "io_failed": 0,
00:33:11.203 "io_timeout": 0,
00:33:11.203 "avg_latency_us": 5997.642002183826,
00:33:11.203 "min_latency_us": 3932.16,
00:33:11.203 "max_latency_us": 23592.96
00:33:11.203 }
00:33:11.203 ],
00:33:11.203 "core_count": 1
00:33:11.203 }
00:33:11.203 15:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 834851
00:33:11.203 15:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 834851 ']'
00:33:11.203 15:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 834851
00:33:11.203 15:43:00
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:33:11.203 15:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:11.203 15:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 834851 00:33:11.464 15:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:11.464 15:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:11.464 15:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 834851' 00:33:11.464 killing process with pid 834851 00:33:11.464 15:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 834851 00:33:11.464 Received shutdown signal, test time was about 10.000000 seconds 00:33:11.464 00:33:11.464 Latency(us) 00:33:11.464 [2024-11-20T14:43:00.424Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:11.464 [2024-11-20T14:43:00.424Z] =================================================================================================================== 00:33:11.464 [2024-11-20T14:43:00.424Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:11.464 15:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 834851 00:33:11.464 15:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:11.725 15:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:11.985 15:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0ae1f1d8-fcee-44d7-bf64-3e8dd7f79eb7 00:33:11.986 15:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:33:11.986 15:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:33:11.986 15:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:33:11.986 15:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:12.246 [2024-11-20 15:43:01.062336] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:33:12.246 15:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0ae1f1d8-fcee-44d7-bf64-3e8dd7f79eb7 00:33:12.246 15:43:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:33:12.246 15:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0ae1f1d8-fcee-44d7-bf64-3e8dd7f79eb7 00:33:12.246 15:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:12.246 15:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:12.246 15:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:12.246 15:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:12.246 15:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:12.246 15:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:12.246 15:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:12.246 15:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:33:12.246 15:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0ae1f1d8-fcee-44d7-bf64-3e8dd7f79eb7 00:33:12.507 request: 00:33:12.507 { 00:33:12.507 "uuid": "0ae1f1d8-fcee-44d7-bf64-3e8dd7f79eb7", 00:33:12.507 "method": "bdev_lvol_get_lvstores", 00:33:12.507 "req_id": 1 00:33:12.507 } 00:33:12.507 Got JSON-RPC error response 00:33:12.507 response: 00:33:12.507 { 00:33:12.507 "code": -19, 00:33:12.507 "message": "No such device" 00:33:12.507 } 00:33:12.507 15:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:33:12.507 15:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:12.507 15:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:12.507 15:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:12.507 15:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:12.507 aio_bdev 00:33:12.507 15:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 16661303-3079-4451-9d31-20b914d519a4 
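The negative check above takes three RPCs: deleting the backing AIO bdev drops the lvstore with it, the next bdev_lvol_get_lvstores is expected to fail with -19 (No such device), and re-creating the AIO bdev lets the on-disk lvol metadata be examined again. Condensed from the trace, with the UUID and path copied from it:

  ./scripts/rpc.py bdev_aio_delete aio_bdev
  ./scripts/rpc.py bdev_lvol_get_lvstores -u 0ae1f1d8-fcee-44d7-bf64-3e8dd7f79eb7   # fails with -19 here
  ./scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096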
00:33:12.507 15:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=16661303-3079-4451-9d31-20b914d519a4 00:33:12.507 15:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:12.508 15:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:33:12.508 15:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:12.508 15:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:12.508 15:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:12.768 15:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 16661303-3079-4451-9d31-20b914d519a4 -t 2000 00:33:13.029 [ 00:33:13.029 { 00:33:13.029 "name": "16661303-3079-4451-9d31-20b914d519a4", 00:33:13.029 "aliases": [ 00:33:13.029 "lvs/lvol" 00:33:13.029 ], 00:33:13.029 "product_name": "Logical Volume", 00:33:13.029 "block_size": 4096, 00:33:13.029 "num_blocks": 38912, 00:33:13.029 "uuid": "16661303-3079-4451-9d31-20b914d519a4", 00:33:13.029 "assigned_rate_limits": { 00:33:13.029 "rw_ios_per_sec": 0, 00:33:13.029 "rw_mbytes_per_sec": 0, 00:33:13.029 "r_mbytes_per_sec": 0, 00:33:13.029 "w_mbytes_per_sec": 0 00:33:13.029 }, 00:33:13.029 "claimed": false, 00:33:13.029 "zoned": false, 00:33:13.029 "supported_io_types": { 00:33:13.029 "read": true, 00:33:13.029 "write": true, 00:33:13.029 "unmap": true, 00:33:13.029 "flush": false, 00:33:13.029 "reset": true, 00:33:13.029 "nvme_admin": false, 00:33:13.029 "nvme_io": false, 00:33:13.029 "nvme_io_md": false, 00:33:13.029 "write_zeroes": true, 00:33:13.029 "zcopy": false, 00:33:13.029 "get_zone_info": false, 00:33:13.029 "zone_management": false, 00:33:13.029 "zone_append": false, 00:33:13.029 "compare": false, 00:33:13.029 "compare_and_write": false, 00:33:13.029 "abort": false, 00:33:13.029 "seek_hole": true, 00:33:13.029 "seek_data": true, 00:33:13.029 "copy": false, 00:33:13.029 "nvme_iov_md": false 00:33:13.029 }, 00:33:13.029 "driver_specific": { 00:33:13.029 "lvol": { 00:33:13.029 "lvol_store_uuid": "0ae1f1d8-fcee-44d7-bf64-3e8dd7f79eb7", 00:33:13.029 "base_bdev": "aio_bdev", 00:33:13.029 "thin_provision": false, 00:33:13.029 "num_allocated_clusters": 38, 00:33:13.029 "snapshot": false, 00:33:13.029 "clone": false, 00:33:13.029 "esnap_clone": false 00:33:13.029 } 00:33:13.029 } 00:33:13.029 } 00:33:13.029 ] 00:33:13.029 15:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:33:13.029 15:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0ae1f1d8-fcee-44d7-bf64-3e8dd7f79eb7 00:33:13.029 15:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:33:13.291 15:43:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:33:13.291 15:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0ae1f1d8-fcee-44d7-bf64-3e8dd7f79eb7 00:33:13.291 15:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:33:13.291 15:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:33:13.291 15:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 16661303-3079-4451-9d31-20b914d519a4 00:33:13.552 15:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0ae1f1d8-fcee-44d7-bf64-3e8dd7f79eb7 00:33:13.812 15:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:13.812 15:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:14.073 00:33:14.073 real 0m16.019s 00:33:14.073 user 0m15.564s 00:33:14.073 sys 0m1.564s 00:33:14.073 15:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:14.073 15:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:33:14.073 ************************************ 00:33:14.073 END TEST lvs_grow_clean 00:33:14.073 ************************************ 00:33:14.073 15:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:33:14.073 15:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:14.073 15:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:14.073 15:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:14.073 ************************************ 00:33:14.073 START TEST lvs_grow_dirty 00:33:14.073 ************************************ 00:33:14.073 15:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:33:14.073 15:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:33:14.073 15:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:33:14.073 15:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:33:14.073 15:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:33:14.073 15:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:33:14.073 15:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:33:14.073 15:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:14.073 15:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:14.073 15:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:14.334 15:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:33:14.334 15:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:33:14.334 15:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=e6a4ff51-38ed-4565-a165-e7ef5b4c3373 00:33:14.334 15:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6a4ff51-38ed-4565-a165-e7ef5b4c3373 00:33:14.334 15:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:33:14.594 15:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:33:14.594 15:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:33:14.594 15:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e6a4ff51-38ed-4565-a165-e7ef5b4c3373 lvol 150 00:33:14.855 15:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f76535e7-add1-4f10-a0f7-40076c5a5084 00:33:14.855 15:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:14.855 15:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:33:14.855 [2024-11-20 15:43:03.770290] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:33:14.855 [2024-11-20 15:43:03.770452] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:33:14.855 true 00:33:14.855 15:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6a4ff51-38ed-4565-a165-e7ef5b4c3373 00:33:14.855 15:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:33:15.116 15:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:33:15.116 15:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:15.377 15:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f76535e7-add1-4f10-a0f7-40076c5a5084 00:33:15.377 15:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:15.638 [2024-11-20 15:43:04.478920] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:15.638 15:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:15.898 15:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=837923 00:33:15.898 15:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:15.898 15:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:33:15.898 15:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 837923 /var/tmp/bdevperf.sock 00:33:15.898 15:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 837923 ']' 00:33:15.898 15:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:15.898 15:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:15.898 15:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:15.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
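[annotation] A condensed sketch of the lvs_grow setup the xtrace above just walked through, for readability when following the trace. Paths are shortened and "rpc" stands in for the full scripts/rpc.py invocation; both are assumptions for brevity, not names from the tree:

    truncate -s 200M aio_file                        # 200 MiB backing file
    rpc bdev_aio_create aio_file aio_bdev 4096       # AIO bdev, 4 KiB blocks
    lvs=$(rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    lvol=$(rpc bdev_lvol_create -u "$lvs" lvol 150)  # 150 MiB logical volume
    truncate -s 400M aio_file                        # grow the backing file...
    rpc bdev_aio_rescan aio_bdev                     # ...and let SPDK see the new size
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420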
00:33:15.898 15:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:15.898 15:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:15.898 [2024-11-20 15:43:04.718171] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:33:15.898 [2024-11-20 15:43:04.718242] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid837923 ] 00:33:15.898 [2024-11-20 15:43:04.805566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:15.898 [2024-11-20 15:43:04.839814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:16.838 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:16.838 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:33:16.838 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:33:16.838 Nvme0n1 00:33:16.838 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:33:17.098 [ 00:33:17.098 { 00:33:17.098 "name": "Nvme0n1", 00:33:17.099 "aliases": [ 00:33:17.099 "f76535e7-add1-4f10-a0f7-40076c5a5084" 00:33:17.099 ], 00:33:17.099 "product_name": "NVMe disk", 00:33:17.099 "block_size": 4096, 00:33:17.099 "num_blocks": 38912, 00:33:17.099 "uuid": "f76535e7-add1-4f10-a0f7-40076c5a5084", 00:33:17.099 "numa_id": 0, 00:33:17.099 "assigned_rate_limits": { 00:33:17.099 "rw_ios_per_sec": 0, 00:33:17.099 "rw_mbytes_per_sec": 0, 00:33:17.099 "r_mbytes_per_sec": 0, 00:33:17.099 "w_mbytes_per_sec": 0 00:33:17.099 }, 00:33:17.099 "claimed": false, 00:33:17.099 "zoned": false, 00:33:17.099 "supported_io_types": { 00:33:17.099 "read": true, 00:33:17.099 "write": true, 00:33:17.099 "unmap": true, 00:33:17.099 "flush": true, 00:33:17.099 "reset": true, 00:33:17.099 "nvme_admin": true, 00:33:17.099 "nvme_io": true, 00:33:17.099 "nvme_io_md": false, 00:33:17.099 "write_zeroes": true, 00:33:17.099 "zcopy": false, 00:33:17.099 "get_zone_info": false, 00:33:17.099 "zone_management": false, 00:33:17.099 "zone_append": false, 00:33:17.099 "compare": true, 00:33:17.099 "compare_and_write": true, 00:33:17.099 "abort": true, 00:33:17.099 "seek_hole": false, 00:33:17.099 "seek_data": false, 00:33:17.099 "copy": true, 00:33:17.099 "nvme_iov_md": false 00:33:17.099 }, 00:33:17.099 "memory_domains": [ 00:33:17.099 { 00:33:17.099 "dma_device_id": "system", 00:33:17.099 "dma_device_type": 1 00:33:17.099 } 00:33:17.099 ], 00:33:17.099 "driver_specific": { 00:33:17.099 "nvme": [ 00:33:17.099 { 00:33:17.099 "trid": { 00:33:17.099 "trtype": "TCP", 00:33:17.099 "adrfam": "IPv4", 00:33:17.099 "traddr": "10.0.0.2", 00:33:17.099 "trsvcid": "4420", 00:33:17.099 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:33:17.099 }, 00:33:17.099 "ctrlr_data": { 
00:33:17.099 "cntlid": 1, 00:33:17.099 "vendor_id": "0x8086", 00:33:17.099 "model_number": "SPDK bdev Controller", 00:33:17.099 "serial_number": "SPDK0", 00:33:17.099 "firmware_revision": "25.01", 00:33:17.099 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:17.099 "oacs": { 00:33:17.099 "security": 0, 00:33:17.099 "format": 0, 00:33:17.099 "firmware": 0, 00:33:17.099 "ns_manage": 0 00:33:17.099 }, 00:33:17.099 "multi_ctrlr": true, 00:33:17.099 "ana_reporting": false 00:33:17.099 }, 00:33:17.099 "vs": { 00:33:17.099 "nvme_version": "1.3" 00:33:17.099 }, 00:33:17.099 "ns_data": { 00:33:17.099 "id": 1, 00:33:17.099 "can_share": true 00:33:17.099 } 00:33:17.099 } 00:33:17.099 ], 00:33:17.099 "mp_policy": "active_passive" 00:33:17.099 } 00:33:17.099 } 00:33:17.099 ] 00:33:17.099 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=838101 00:33:17.099 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:33:17.099 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:17.099 Running I/O for 10 seconds... 00:33:18.040 Latency(us) 00:33:18.040 [2024-11-20T14:43:07.000Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:18.040 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:18.040 Nvme0n1 : 1.00 17399.00 67.96 0.00 0.00 0.00 0.00 0.00 00:33:18.040 [2024-11-20T14:43:07.000Z] =================================================================================================================== 00:33:18.040 [2024-11-20T14:43:07.000Z] Total : 17399.00 67.96 0.00 0.00 0.00 0.00 0.00 00:33:18.040 00:33:18.981 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e6a4ff51-38ed-4565-a165-e7ef5b4c3373 00:33:19.242 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:19.242 Nvme0n1 : 2.00 17653.00 68.96 0.00 0.00 0.00 0.00 0.00 00:33:19.242 [2024-11-20T14:43:08.202Z] =================================================================================================================== 00:33:19.242 [2024-11-20T14:43:08.202Z] Total : 17653.00 68.96 0.00 0.00 0.00 0.00 0.00 00:33:19.242 00:33:19.242 true 00:33:19.242 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6a4ff51-38ed-4565-a165-e7ef5b4c3373 00:33:19.242 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:33:19.501 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:33:19.501 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:33:19.501 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 838101 00:33:20.071 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:20.071 Nvme0n1 : 3.00 
17780.00 69.45 0.00 0.00 0.00 0.00 0.00 00:33:20.071 [2024-11-20T14:43:09.031Z] =================================================================================================================== 00:33:20.071 [2024-11-20T14:43:09.031Z] Total : 17780.00 69.45 0.00 0.00 0.00 0.00 0.00 00:33:20.071 00:33:21.454 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:21.454 Nvme0n1 : 4.00 17811.75 69.58 0.00 0.00 0.00 0.00 0.00 00:33:21.454 [2024-11-20T14:43:10.414Z] =================================================================================================================== 00:33:21.454 [2024-11-20T14:43:10.414Z] Total : 17811.75 69.58 0.00 0.00 0.00 0.00 0.00 00:33:21.454 00:33:22.394 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:22.394 Nvme0n1 : 5.00 18618.20 72.73 0.00 0.00 0.00 0.00 0.00 00:33:22.394 [2024-11-20T14:43:11.354Z] =================================================================================================================== 00:33:22.394 [2024-11-20T14:43:11.354Z] Total : 18618.20 72.73 0.00 0.00 0.00 0.00 0.00 00:33:22.394 00:33:23.333 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:23.333 Nvme0n1 : 6.00 19738.00 77.10 0.00 0.00 0.00 0.00 0.00 00:33:23.333 [2024-11-20T14:43:12.293Z] =================================================================================================================== 00:33:23.333 [2024-11-20T14:43:12.293Z] Total : 19738.00 77.10 0.00 0.00 0.00 0.00 0.00 00:33:23.333 00:33:24.275 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:24.275 Nvme0n1 : 7.00 20544.71 80.25 0.00 0.00 0.00 0.00 0.00 00:33:24.275 [2024-11-20T14:43:13.235Z] =================================================================================================================== 00:33:24.275 [2024-11-20T14:43:13.235Z] Total : 20544.71 80.25 0.00 0.00 0.00 0.00 0.00 00:33:24.275 00:33:25.215 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:25.215 Nvme0n1 : 8.00 21151.62 82.62 0.00 0.00 0.00 0.00 0.00 00:33:25.215 [2024-11-20T14:43:14.175Z] =================================================================================================================== 00:33:25.215 [2024-11-20T14:43:14.175Z] Total : 21151.62 82.62 0.00 0.00 0.00 0.00 0.00 00:33:25.215 00:33:26.154 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:26.154 Nvme0n1 : 9.00 21623.67 84.47 0.00 0.00 0.00 0.00 0.00 00:33:26.154 [2024-11-20T14:43:15.114Z] =================================================================================================================== 00:33:26.154 [2024-11-20T14:43:15.114Z] Total : 21623.67 84.47 0.00 0.00 0.00 0.00 0.00 00:33:26.154 00:33:27.095 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:27.095 Nvme0n1 : 10.00 22001.30 85.94 0.00 0.00 0.00 0.00 0.00 00:33:27.095 [2024-11-20T14:43:16.055Z] =================================================================================================================== 00:33:27.095 [2024-11-20T14:43:16.055Z] Total : 22001.30 85.94 0.00 0.00 0.00 0.00 0.00 00:33:27.095 00:33:27.095 00:33:27.095 Latency(us) 00:33:27.095 [2024-11-20T14:43:16.055Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:27.095 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:27.095 Nvme0n1 : 10.00 22008.35 85.97 0.00 0.00 5813.33 2880.85 31457.28 00:33:27.095 
[2024-11-20T14:43:16.055Z] =================================================================================================================== 00:33:27.095 [2024-11-20T14:43:16.055Z] Total : 22008.35 85.97 0.00 0.00 5813.33 2880.85 31457.28 00:33:27.095 { 00:33:27.095 "results": [ 00:33:27.095 { 00:33:27.095 "job": "Nvme0n1", 00:33:27.095 "core_mask": "0x2", 00:33:27.095 "workload": "randwrite", 00:33:27.096 "status": "finished", 00:33:27.096 "queue_depth": 128, 00:33:27.096 "io_size": 4096, 00:33:27.096 "runtime": 10.002611, 00:33:27.096 "iops": 22008.353618870115, 00:33:27.096 "mibps": 85.97013132371139, 00:33:27.096 "io_failed": 0, 00:33:27.096 "io_timeout": 0, 00:33:27.096 "avg_latency_us": 5813.325695077246, 00:33:27.096 "min_latency_us": 2880.8533333333335, 00:33:27.096 "max_latency_us": 31457.28 00:33:27.096 } 00:33:27.096 ], 00:33:27.096 "core_count": 1 00:33:27.096 } 00:33:27.096 15:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 837923 00:33:27.096 15:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 837923 ']' 00:33:27.096 15:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 837923 00:33:27.096 15:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:33:27.096 15:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:27.096 15:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 837923 00:33:27.357 15:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:27.357 15:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:27.357 15:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 837923' 00:33:27.357 killing process with pid 837923 00:33:27.357 15:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 837923 00:33:27.357 Received shutdown signal, test time was about 10.000000 seconds 00:33:27.357 00:33:27.357 Latency(us) 00:33:27.357 [2024-11-20T14:43:16.317Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:27.357 [2024-11-20T14:43:16.317Z] =================================================================================================================== 00:33:27.357 [2024-11-20T14:43:16.317Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:27.357 15:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 837923 00:33:27.357 15:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:27.617 15:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 
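[annotation] The cluster counts asserted around the bdevperf run follow from simple arithmetic with the 4 MiB cluster size (--cluster-sz 4194304); the one-cluster metadata overhead is inferred from the numbers, not stated anywhere in the trace:

    #   200 MiB file -> 50 clusters, 49 usable for data (1 apparently to lvstore metadata)
    #   400 MiB file -> 100 clusters, 99 usable after bdev_lvol_grow_lvstore
    #   150 MiB lvol -> ceil(150 / 4) = 38 allocated clusters ("num_allocated_clusters": 38)
    #   free after grow: 99 - 38 = 61
    rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # 99
    rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'        # 61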
00:33:27.878 15:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6a4ff51-38ed-4565-a165-e7ef5b4c3373 00:33:27.878 15:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:33:27.878 15:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:33:27.878 15:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:33:27.878 15:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 834148 00:33:27.878 15:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 834148 00:33:27.878 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 834148 Killed "${NVMF_APP[@]}" "$@" 00:33:27.878 15:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:33:27.878 15:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:33:27.878 15:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:27.878 15:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:27.878 15:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:27.878 15:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=840188 00:33:27.878 15:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 840188 00:33:27.878 15:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:33:27.878 15:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 840188 ']' 00:33:27.878 15:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:27.878 15:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:27.878 15:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:27.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:27.878 15:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:27.878 15:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:28.138 [2024-11-20 15:43:16.865003] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:28.138 [2024-11-20 15:43:16.866037] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:33:28.138 [2024-11-20 15:43:16.866086] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:28.138 [2024-11-20 15:43:16.961698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:28.138 [2024-11-20 15:43:16.993991] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:28.138 [2024-11-20 15:43:16.994020] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:28.138 [2024-11-20 15:43:16.994026] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:28.138 [2024-11-20 15:43:16.994031] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:28.138 [2024-11-20 15:43:16.994035] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:28.138 [2024-11-20 15:43:16.994536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:28.138 [2024-11-20 15:43:17.045792] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:28.138 [2024-11-20 15:43:17.045987] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
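[annotation] This is the point of the "dirty" variant: the previous nvmf_tgt (pid 834148) was SIGKILLed with the lvstore still open, so the metadata on the AIO file was never cleanly unloaded. A minimal sketch of the re-attach the trace performs next, using the same shorthand as above:

    rpc bdev_aio_create aio_file aio_bdev 4096   # examine finds a dirty blobstore and
                                                 # replays it ("Performing recovery on
                                                 # blobstore" / "Recover: blob ..." below)
    rpc bdev_wait_for_examine                    # block until recovery completes
    rpc bdev_get_bdevs -b "$lvol" -t 2000        # the lvol reappears under lvs/lvol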
00:33:28.707 15:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:28.707 15:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:33:28.707 15:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:28.707 15:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:28.707 15:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:28.968 15:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:28.968 15:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:28.968 [2024-11-20 15:43:17.860780] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:33:28.968 [2024-11-20 15:43:17.861026] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:33:28.968 [2024-11-20 15:43:17.861116] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:33:28.968 15:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:33:28.968 15:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f76535e7-add1-4f10-a0f7-40076c5a5084 00:33:28.968 15:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f76535e7-add1-4f10-a0f7-40076c5a5084 00:33:28.968 15:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:28.968 15:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:33:28.968 15:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:28.968 15:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:28.968 15:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:29.229 15:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f76535e7-add1-4f10-a0f7-40076c5a5084 -t 2000 00:33:29.489 [ 00:33:29.489 { 00:33:29.489 "name": "f76535e7-add1-4f10-a0f7-40076c5a5084", 00:33:29.489 "aliases": [ 00:33:29.489 "lvs/lvol" 00:33:29.489 ], 00:33:29.489 "product_name": "Logical Volume", 00:33:29.489 "block_size": 4096, 00:33:29.489 "num_blocks": 38912, 00:33:29.489 "uuid": "f76535e7-add1-4f10-a0f7-40076c5a5084", 00:33:29.489 "assigned_rate_limits": { 00:33:29.489 "rw_ios_per_sec": 0, 00:33:29.489 "rw_mbytes_per_sec": 0, 00:33:29.489 
"r_mbytes_per_sec": 0, 00:33:29.489 "w_mbytes_per_sec": 0 00:33:29.489 }, 00:33:29.489 "claimed": false, 00:33:29.489 "zoned": false, 00:33:29.489 "supported_io_types": { 00:33:29.489 "read": true, 00:33:29.489 "write": true, 00:33:29.489 "unmap": true, 00:33:29.489 "flush": false, 00:33:29.489 "reset": true, 00:33:29.489 "nvme_admin": false, 00:33:29.489 "nvme_io": false, 00:33:29.489 "nvme_io_md": false, 00:33:29.489 "write_zeroes": true, 00:33:29.489 "zcopy": false, 00:33:29.489 "get_zone_info": false, 00:33:29.489 "zone_management": false, 00:33:29.489 "zone_append": false, 00:33:29.489 "compare": false, 00:33:29.489 "compare_and_write": false, 00:33:29.489 "abort": false, 00:33:29.489 "seek_hole": true, 00:33:29.489 "seek_data": true, 00:33:29.489 "copy": false, 00:33:29.489 "nvme_iov_md": false 00:33:29.489 }, 00:33:29.489 "driver_specific": { 00:33:29.489 "lvol": { 00:33:29.489 "lvol_store_uuid": "e6a4ff51-38ed-4565-a165-e7ef5b4c3373", 00:33:29.489 "base_bdev": "aio_bdev", 00:33:29.489 "thin_provision": false, 00:33:29.489 "num_allocated_clusters": 38, 00:33:29.489 "snapshot": false, 00:33:29.489 "clone": false, 00:33:29.489 "esnap_clone": false 00:33:29.489 } 00:33:29.489 } 00:33:29.489 } 00:33:29.489 ] 00:33:29.489 15:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:33:29.489 15:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6a4ff51-38ed-4565-a165-e7ef5b4c3373 00:33:29.489 15:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:33:29.489 15:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:33:29.489 15:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6a4ff51-38ed-4565-a165-e7ef5b4c3373 00:33:29.489 15:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:33:29.750 15:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:33:29.750 15:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:30.010 [2024-11-20 15:43:18.755036] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:33:30.010 15:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6a4ff51-38ed-4565-a165-e7ef5b4c3373 00:33:30.010 15:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:33:30.010 15:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6a4ff51-38ed-4565-a165-e7ef5b4c3373 00:33:30.010 15:43:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:30.010 15:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:30.010 15:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:30.010 15:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:30.010 15:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:30.010 15:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:30.010 15:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:30.010 15:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:33:30.010 15:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6a4ff51-38ed-4565-a165-e7ef5b4c3373 00:33:30.270 request: 00:33:30.270 { 00:33:30.270 "uuid": "e6a4ff51-38ed-4565-a165-e7ef5b4c3373", 00:33:30.270 "method": "bdev_lvol_get_lvstores", 00:33:30.270 "req_id": 1 00:33:30.270 } 00:33:30.270 Got JSON-RPC error response 00:33:30.270 response: 00:33:30.270 { 00:33:30.270 "code": -19, 00:33:30.270 "message": "No such device" 00:33:30.270 } 00:33:30.270 15:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:33:30.270 15:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:30.270 15:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:30.270 15:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:30.270 15:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:30.270 aio_bdev 00:33:30.271 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f76535e7-add1-4f10-a0f7-40076c5a5084 00:33:30.271 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f76535e7-add1-4f10-a0f7-40076c5a5084 00:33:30.271 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:30.271 15:43:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:33:30.271 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:30.271 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:30.271 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:30.530 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f76535e7-add1-4f10-a0f7-40076c5a5084 -t 2000 00:33:30.789 [ 00:33:30.789 { 00:33:30.789 "name": "f76535e7-add1-4f10-a0f7-40076c5a5084", 00:33:30.789 "aliases": [ 00:33:30.789 "lvs/lvol" 00:33:30.789 ], 00:33:30.789 "product_name": "Logical Volume", 00:33:30.789 "block_size": 4096, 00:33:30.789 "num_blocks": 38912, 00:33:30.789 "uuid": "f76535e7-add1-4f10-a0f7-40076c5a5084", 00:33:30.789 "assigned_rate_limits": { 00:33:30.789 "rw_ios_per_sec": 0, 00:33:30.789 "rw_mbytes_per_sec": 0, 00:33:30.789 "r_mbytes_per_sec": 0, 00:33:30.789 "w_mbytes_per_sec": 0 00:33:30.789 }, 00:33:30.789 "claimed": false, 00:33:30.789 "zoned": false, 00:33:30.789 "supported_io_types": { 00:33:30.789 "read": true, 00:33:30.789 "write": true, 00:33:30.789 "unmap": true, 00:33:30.789 "flush": false, 00:33:30.789 "reset": true, 00:33:30.789 "nvme_admin": false, 00:33:30.789 "nvme_io": false, 00:33:30.789 "nvme_io_md": false, 00:33:30.789 "write_zeroes": true, 00:33:30.789 "zcopy": false, 00:33:30.789 "get_zone_info": false, 00:33:30.789 "zone_management": false, 00:33:30.789 "zone_append": false, 00:33:30.789 "compare": false, 00:33:30.789 "compare_and_write": false, 00:33:30.789 "abort": false, 00:33:30.789 "seek_hole": true, 00:33:30.789 "seek_data": true, 00:33:30.789 "copy": false, 00:33:30.789 "nvme_iov_md": false 00:33:30.789 }, 00:33:30.789 "driver_specific": { 00:33:30.789 "lvol": { 00:33:30.789 "lvol_store_uuid": "e6a4ff51-38ed-4565-a165-e7ef5b4c3373", 00:33:30.789 "base_bdev": "aio_bdev", 00:33:30.789 "thin_provision": false, 00:33:30.789 "num_allocated_clusters": 38, 00:33:30.789 "snapshot": false, 00:33:30.789 "clone": false, 00:33:30.789 "esnap_clone": false 00:33:30.789 } 00:33:30.789 } 00:33:30.789 } 00:33:30.789 ] 00:33:30.789 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:33:30.789 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6a4ff51-38ed-4565-a165-e7ef5b4c3373 00:33:30.790 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:33:30.790 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:33:30.790 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6a4ff51-38ed-4565-a165-e7ef5b4c3373 00:33:30.790 15:43:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:33:31.050 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:33:31.050 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f76535e7-add1-4f10-a0f7-40076c5a5084 00:33:31.050 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e6a4ff51-38ed-4565-a165-e7ef5b4c3373 00:33:31.310 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:31.571 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:31.571 00:33:31.571 real 0m17.552s 00:33:31.571 user 0m35.316s 00:33:31.571 sys 0m3.227s 00:33:31.571 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:31.571 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:31.571 ************************************ 00:33:31.571 END TEST lvs_grow_dirty 00:33:31.571 ************************************ 00:33:31.571 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:33:31.571 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:33:31.571 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:33:31.571 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:33:31.571 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:33:31.571 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:33:31.571 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:33:31.571 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:33:31.571 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:33:31.571 nvmf_trace.0 00:33:31.571 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:33:31.571 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:33:31.571 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:31.571 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
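[annotation] The remainder of the section is nvmftestfini teardown; its rough shape, reconstructed from the commands traced above and below ($output_dir is a stand-in for the Jenkins output path, and the iptables pipeline is how the log's iptr helper reads):

    tar -C /dev/shm/ -cvzf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0  # archive tracepoint data
    modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring                          # unload initiator modules
    kill "$nvmfpid" && wait "$nvmfpid"                                         # stop the target app
    iptables-save | grep -v SPDK_NVMF | iptables-restore                       # drop test firewall rules
    ip -4 addr flush cvl_0_1                                                   # clear the test interface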
00:33:31.571 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:31.571 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:33:31.572 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:31.572 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:31.832 rmmod nvme_tcp 00:33:31.832 rmmod nvme_fabrics 00:33:31.832 rmmod nvme_keyring 00:33:31.832 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:31.832 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:33:31.832 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:33:31.832 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 840188 ']' 00:33:31.832 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 840188 00:33:31.832 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 840188 ']' 00:33:31.832 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 840188 00:33:31.832 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:33:31.832 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:31.832 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 840188 00:33:31.832 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:31.832 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:31.832 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 840188' 00:33:31.832 killing process with pid 840188 00:33:31.832 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 840188 00:33:31.832 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 840188 00:33:31.832 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:31.832 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:31.832 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:31.832 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:33:31.832 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:33:31.832 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:31.832 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:33:32.093 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:32.093 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:32.093 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:32.093 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:32.093 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:34.005 15:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:34.005 00:33:34.005 real 0m45.042s 00:33:34.005 user 0m53.896s 00:33:34.005 sys 0m10.986s 00:33:34.005 15:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:34.005 15:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:34.005 ************************************ 00:33:34.005 END TEST nvmf_lvs_grow 00:33:34.005 ************************************ 00:33:34.005 15:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:33:34.005 15:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:34.005 15:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:34.005 15:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:34.005 ************************************ 00:33:34.005 START TEST nvmf_bdev_io_wait 00:33:34.005 ************************************ 00:33:34.005 15:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:33:34.265 * Looking for test storage... 
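[annotation] The xtrace below steps through scripts/common.sh's lcov version gate ("lt 1.15 2", which calls cmp_versions). A condensed sketch of that comparison logic; cmp_lt is a made-up name for illustration, and it assumes only numeric components matter:

    cmp_lt() {  # return 0 (true) when version $1 sorts before $2
        local IFS=.-:
        local -a a=($1) b=($2)   # split on '.', '-', ':' like the real helper
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1  # equal is not less-than
    }
    cmp_lt 1.15 2 && echo "lcov is older than 2.x"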
00:33:34.265 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:34.265 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:34.265 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:33:34.265 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:34.265 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:34.265 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:34.265 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:34.265 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:34.265 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:34.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.266 --rc genhtml_branch_coverage=1 00:33:34.266 --rc genhtml_function_coverage=1 00:33:34.266 --rc genhtml_legend=1 00:33:34.266 --rc geninfo_all_blocks=1 00:33:34.266 --rc geninfo_unexecuted_blocks=1 00:33:34.266 00:33:34.266 ' 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:34.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.266 --rc genhtml_branch_coverage=1 00:33:34.266 --rc genhtml_function_coverage=1 00:33:34.266 --rc genhtml_legend=1 00:33:34.266 --rc geninfo_all_blocks=1 00:33:34.266 --rc geninfo_unexecuted_blocks=1 00:33:34.266 00:33:34.266 ' 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:34.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.266 --rc genhtml_branch_coverage=1 00:33:34.266 --rc genhtml_function_coverage=1 00:33:34.266 --rc genhtml_legend=1 00:33:34.266 --rc geninfo_all_blocks=1 00:33:34.266 --rc geninfo_unexecuted_blocks=1 00:33:34.266 00:33:34.266 ' 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:34.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.266 --rc genhtml_branch_coverage=1 00:33:34.266 --rc genhtml_function_coverage=1 00:33:34.266 --rc genhtml_legend=1 00:33:34.266 --rc geninfo_all_blocks=1 00:33:34.266 --rc 
geninfo_unexecuted_blocks=1 00:33:34.266 00:33:34.266 ' 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:34.266 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:33:34.267 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:34.267 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:34.267 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:34.267 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:34.267 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:34.267 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:34.267 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:34.267 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:34.267 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:34.267 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:34.267 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:33:34.267 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:42.479 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:42.479 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:33:42.479 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:42.479 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:42.479 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:42.479 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:42.479 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:33:42.479 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:33:42.479 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:42.479 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:33:42.479 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:33:42.479 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:33:42.479 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:33:42.479 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:33:42.479 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:33:42.479 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:42.479 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:42.479 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:42.480 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:42.480 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:42.480 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:42.480 
15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:42.480 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:33:42.480 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:33:42.481 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:33:42.481 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:33:42.481 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:33:42.481 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:33:42.481 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:33:42.481 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:33:42.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:33:42.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.699 ms
00:33:42.481 
00:33:42.481 --- 10.0.0.2 ping statistics ---
00:33:42.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:42.481 rtt min/avg/max/mdev = 0.699/0.699/0.699/0.000 ms
00:33:42.481 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:33:42.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:33:42.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms
00:33:42.481 
00:33:42.481 --- 10.0.0.1 ping statistics ---
00:33:42.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:42.481 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms
00:33:42.481 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:33:42.481 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0
00:33:42.481 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:33:42.481 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:33:42.481 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:33:42.481 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:33:42.481 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:33:42.481 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:33:42.481 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:33:42.481 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc
00:33:42.481 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:33:42.481 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable
00:33:42.481 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:33:42.481 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=845031
00:33:42.481 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 845031
00:33:42.481 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc
00:33:42.481 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 845031 ']'
00:33:42.481 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:42.481 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:42.481 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:42.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:42.481 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:42.481 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:33:42.481 [2024-11-20 15:43:30.784310] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
[2024-11-20 15:43:30.785428] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization...
[2024-11-20 15:43:30.785479] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[2024-11-20 15:43:30.887501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
[2024-11-20 15:43:30.942707] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-11-20 15:43:30.942761] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-11-20 15:43:30.942770] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-11-20 15:43:30.942777] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-11-20 15:43:30.942784] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[2024-11-20 15:43:30.944829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
[2024-11-20 15:43:30.944988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
[2024-11-20 15:43:30.945131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
[2024-11-20 15:43:30.945133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-20 15:43:30.945592] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:33:42.742 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:42.742 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:33:42.742 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:42.742 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:42.742 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:42.742 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:42.742 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:33:42.742 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.742 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:42.742 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.742 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:33:42.742 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.742 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:43.004 [2024-11-20 15:43:31.713908] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:43.004 [2024-11-20 15:43:31.714828] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:43.004 [2024-11-20 15:43:31.714980] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:43.004 [2024-11-20 15:43:31.715118] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:33:43.004 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.004 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:43.004 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.004 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:43.004 [2024-11-20 15:43:31.726106] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:43.004 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.004 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:43.004 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.004 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:43.004 Malloc0 00:33:43.004 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.004 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:43.004 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.004 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:43.004 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.004 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:43.004 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.004 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:43.004 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.004 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:43.004 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.004 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:43.004 [2024-11-20 15:43:31.798303] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:43.004 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.004 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=845381 00:33:43.004 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=845383 00:33:43.004 15:43:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:33:43.004 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:33:43.004 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:43.004 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:43.004 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:43.004 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:43.004 { 00:33:43.004 "params": { 00:33:43.004 "name": "Nvme$subsystem", 00:33:43.004 "trtype": "$TEST_TRANSPORT", 00:33:43.004 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:43.004 "adrfam": "ipv4", 00:33:43.004 "trsvcid": "$NVMF_PORT", 00:33:43.004 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:43.004 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:43.004 "hdgst": ${hdgst:-false}, 00:33:43.004 "ddgst": ${ddgst:-false} 00:33:43.004 }, 00:33:43.005 "method": "bdev_nvme_attach_controller" 00:33:43.005 } 00:33:43.005 EOF 00:33:43.005 )") 00:33:43.005 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=845385 00:33:43.005 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:33:43.005 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:33:43.005 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:43.005 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:43.005 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:43.005 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:43.005 { 00:33:43.005 "params": { 00:33:43.005 "name": "Nvme$subsystem", 00:33:43.005 "trtype": "$TEST_TRANSPORT", 00:33:43.005 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:43.005 "adrfam": "ipv4", 00:33:43.005 "trsvcid": "$NVMF_PORT", 00:33:43.005 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:43.005 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:43.005 "hdgst": ${hdgst:-false}, 00:33:43.005 "ddgst": ${ddgst:-false} 00:33:43.005 }, 00:33:43.005 "method": "bdev_nvme_attach_controller" 00:33:43.005 } 00:33:43.005 EOF 00:33:43.005 )") 00:33:43.005 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=845388 00:33:43.005 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:33:43.005 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 
00:33:43.005 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:33:43.005 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:43.005 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:43.005 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:43.005 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:33:43.005 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:43.005 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:33:43.005 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:43.005 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:43.005 { 00:33:43.005 "params": { 00:33:43.005 "name": "Nvme$subsystem", 00:33:43.005 "trtype": "$TEST_TRANSPORT", 00:33:43.005 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:43.005 "adrfam": "ipv4", 00:33:43.005 "trsvcid": "$NVMF_PORT", 00:33:43.005 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:43.005 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:43.005 "hdgst": ${hdgst:-false}, 00:33:43.005 "ddgst": ${ddgst:-false} 00:33:43.005 }, 00:33:43.005 "method": "bdev_nvme_attach_controller" 00:33:43.005 } 00:33:43.005 EOF 00:33:43.005 )") 00:33:43.005 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:43.005 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:43.005 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:43.005 { 00:33:43.005 "params": { 00:33:43.005 "name": "Nvme$subsystem", 00:33:43.005 "trtype": "$TEST_TRANSPORT", 00:33:43.005 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:43.005 "adrfam": "ipv4", 00:33:43.005 "trsvcid": "$NVMF_PORT", 00:33:43.005 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:43.005 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:43.005 "hdgst": ${hdgst:-false}, 00:33:43.005 "ddgst": ${ddgst:-false} 00:33:43.005 }, 00:33:43.005 "method": "bdev_nvme_attach_controller" 00:33:43.005 } 00:33:43.005 EOF 00:33:43.005 )") 00:33:43.005 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:43.005 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:43.005 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:43.005 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 845381 00:33:43.005 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:43.005 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:33:43.005 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:43.005 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:43.005 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:43.005 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:43.005 "params": { 00:33:43.005 "name": "Nvme1", 00:33:43.005 "trtype": "tcp", 00:33:43.005 "traddr": "10.0.0.2", 00:33:43.005 "adrfam": "ipv4", 00:33:43.005 "trsvcid": "4420", 00:33:43.005 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:43.005 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:43.005 "hdgst": false, 00:33:43.005 "ddgst": false 00:33:43.005 }, 00:33:43.005 "method": "bdev_nvme_attach_controller" 00:33:43.005 }' 00:33:43.005 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:43.005 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:43.005 "params": { 00:33:43.005 "name": "Nvme1", 00:33:43.005 "trtype": "tcp", 00:33:43.005 "traddr": "10.0.0.2", 00:33:43.005 "adrfam": "ipv4", 00:33:43.005 "trsvcid": "4420", 00:33:43.005 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:43.005 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:43.005 "hdgst": false, 00:33:43.005 "ddgst": false 00:33:43.005 }, 00:33:43.005 "method": "bdev_nvme_attach_controller" 00:33:43.005 }' 00:33:43.005 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:43.005 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:43.005 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:43.005 "params": { 00:33:43.005 "name": "Nvme1", 00:33:43.005 "trtype": "tcp", 00:33:43.005 "traddr": "10.0.0.2", 00:33:43.005 "adrfam": "ipv4", 00:33:43.005 "trsvcid": "4420", 00:33:43.005 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:43.005 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:43.005 "hdgst": false, 00:33:43.005 "ddgst": false 00:33:43.005 }, 00:33:43.005 "method": "bdev_nvme_attach_controller" 00:33:43.005 }' 00:33:43.005 15:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:43.005 "params": { 00:33:43.005 "name": "Nvme1", 00:33:43.005 "trtype": "tcp", 00:33:43.005 "traddr": "10.0.0.2", 00:33:43.005 "adrfam": "ipv4", 00:33:43.005 "trsvcid": "4420", 00:33:43.005 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:43.005 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:43.005 "hdgst": false, 00:33:43.005 "ddgst": false 00:33:43.005 }, 00:33:43.005 "method": "bdev_nvme_attach_controller" 00:33:43.005 }' 00:33:43.005 [2024-11-20 15:43:31.857448] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:33:43.005 [2024-11-20 15:43:31.857523] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:33:43.005 [2024-11-20 15:43:31.857649] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
00:33:43.005 [2024-11-20 15:43:31.857714] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:33:43.006 [2024-11-20 15:43:31.860816] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:33:43.006 [2024-11-20 15:43:31.860831] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:33:43.006 [2024-11-20 15:43:31.860889] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:33:43.006 [2024-11-20 15:43:31.860897] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:33:43.267 [2024-11-20 15:43:32.083612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:43.267 [2024-11-20 15:43:32.123905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:43.268 [2024-11-20 15:43:32.174795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:43.268 [2024-11-20 15:43:32.213855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:43.529 [2024-11-20 15:43:32.243010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:43.529 [2024-11-20 15:43:32.277392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:43.529 [2024-11-20 15:43:32.307269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:43.529 [2024-11-20 15:43:32.346588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:33:43.529 Running I/O for 1 seconds... 00:33:43.529 Running I/O for 1 seconds... 00:33:43.788 Running I/O for 1 seconds... 00:33:43.788 Running I/O for 1 seconds... 
00:33:44.730 12779.00 IOPS, 49.92 MiB/s
00:33:44.730 Latency(us)
00:33:44.730 [2024-11-20T14:43:33.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:44.730 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:33:44.730 Nvme1n1 : 1.01 12822.66 50.09 0.00 0.00 9947.09 5051.73 12451.84
00:33:44.730 [2024-11-20T14:43:33.690Z] ===================================================================================================================
00:33:44.730 [2024-11-20T14:43:33.690Z] Total : 12822.66 50.09 0.00 0.00 9947.09 5051.73 12451.84
00:33:44.730 6491.00 IOPS, 25.36 MiB/s
00:33:44.730 Latency(us)
00:33:44.730 [2024-11-20T14:43:33.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:44.730 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:33:44.730 Nvme1n1 : 1.02 6558.25 25.62 0.00 0.00 19428.81 2894.51 33204.91
00:33:44.730 [2024-11-20T14:43:33.690Z] ===================================================================================================================
00:33:44.730 [2024-11-20T14:43:33.690Z] Total : 6558.25 25.62 0.00 0.00 19428.81 2894.51 33204.91
00:33:44.730 181120.00 IOPS, 707.50 MiB/s
00:33:44.730 Latency(us)
00:33:44.730 [2024-11-20T14:43:33.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:44.731 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:33:44.731 Nvme1n1 : 1.00 180763.04 706.11 0.00 0.00 703.82 305.49 1979.73
00:33:44.731 [2024-11-20T14:43:33.691Z] ===================================================================================================================
00:33:44.731 [2024-11-20T14:43:33.691Z] Total : 180763.04 706.11 0.00 0.00 703.82 305.49 1979.73
00:33:44.731 6485.00 IOPS, 25.33 MiB/s
00:33:44.731 Latency(us)
00:33:44.731 [2024-11-20T14:43:33.691Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:44.731 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:33:44.731 Nvme1n1 : 1.01 6600.49 25.78 0.00 0.00 19332.63 4560.21 37355.52
00:33:44.731 [2024-11-20T14:43:33.691Z] ===================================================================================================================
00:33:44.731 [2024-11-20T14:43:33.691Z] Total : 6600.49 25.78 0.00 0.00 19332.63 4560.21 37355.52
00:33:44.731 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 845383
00:33:44.731 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 845385
00:33:44.731 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 845388
00:33:44.731 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:33:44.731 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:44.731 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:33:44.731 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:44.731 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:33:44.731 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait --
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:33:44.731 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:44.731 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:33:44.731 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:44.731 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:33:44.731 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:44.731 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:44.731 rmmod nvme_tcp 00:33:44.992 rmmod nvme_fabrics 00:33:44.992 rmmod nvme_keyring 00:33:44.992 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:44.992 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:33:44.992 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:33:44.992 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 845031 ']' 00:33:44.992 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 845031 00:33:44.992 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 845031 ']' 00:33:44.992 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 845031 00:33:44.992 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:33:44.992 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:44.992 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 845031 00:33:44.992 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:44.992 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:44.992 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 845031' 00:33:44.992 killing process with pid 845031 00:33:44.992 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 845031 00:33:44.992 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 845031 00:33:45.253 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:45.253 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:45.253 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:45.253 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:33:45.253 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:33:45.253 
15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:33:45.253 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:45.253 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:45.253 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:45.253 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:45.253 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:45.253 15:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:47.166 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:47.166 00:33:47.166 real 0m13.100s 00:33:47.166 user 0m15.939s 00:33:47.166 sys 0m7.619s 00:33:47.166 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:47.166 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:47.166 ************************************ 00:33:47.166 END TEST nvmf_bdev_io_wait 00:33:47.166 ************************************ 00:33:47.166 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:47.166 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:47.166 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:47.166 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:47.428 ************************************ 00:33:47.428 START TEST nvmf_queue_depth 00:33:47.428 ************************************ 00:33:47.428 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:47.428 * Looking for test storage... 
00:33:47.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:47.428 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:47.428 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:33:47.428 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:47.428 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:47.428 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:47.428 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:47.428 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:47.428 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:33:47.428 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:33:47.428 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:33:47.428 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:33:47.428 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:33:47.428 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:33:47.428 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:33:47.428 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:47.428 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:33:47.428 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:33:47.428 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:47.428 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:47.428 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:33:47.428 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:33:47.428 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:47.428 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:33:47.428 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:33:47.428 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:33:47.428 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:33:47.428 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:47.428 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:33:47.428 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:33:47.428 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:47.428 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:47.428 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:47.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:47.429 --rc genhtml_branch_coverage=1 00:33:47.429 --rc genhtml_function_coverage=1 00:33:47.429 --rc genhtml_legend=1 00:33:47.429 --rc geninfo_all_blocks=1 00:33:47.429 --rc geninfo_unexecuted_blocks=1 00:33:47.429 00:33:47.429 ' 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:47.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:47.429 --rc genhtml_branch_coverage=1 00:33:47.429 --rc genhtml_function_coverage=1 00:33:47.429 --rc genhtml_legend=1 00:33:47.429 --rc geninfo_all_blocks=1 00:33:47.429 --rc geninfo_unexecuted_blocks=1 00:33:47.429 00:33:47.429 ' 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:47.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:47.429 --rc genhtml_branch_coverage=1 00:33:47.429 --rc genhtml_function_coverage=1 00:33:47.429 --rc genhtml_legend=1 00:33:47.429 --rc geninfo_all_blocks=1 00:33:47.429 --rc geninfo_unexecuted_blocks=1 00:33:47.429 00:33:47.429 ' 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:47.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:47.429 --rc genhtml_branch_coverage=1 00:33:47.429 --rc genhtml_function_coverage=1 00:33:47.429 --rc genhtml_legend=1 00:33:47.429 --rc geninfo_all_blocks=1 00:33:47.429 --rc 
geninfo_unexecuted_blocks=1 00:33:47.429 00:33:47.429 ' 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:47.429 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:47.691 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:47.691 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:47.691 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:33:47.691 15:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:55.834 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:55.834 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:33:55.834 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:55.834 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:55.834 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:55.834 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:33:55.834 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:55.834 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:33:55.834 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:55.834 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:33:55.834 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:33:55.834 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:33:55.834 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:33:55.834 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:33:55.834 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:33:55.834 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:55.834 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:55.834 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:55.834 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:55.834 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:55.834 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:55.834 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:55.834 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:55.834 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:55.834 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:55.834 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:55.834 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:55.834 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:55.834 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:55.834 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:55.834 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:55.834 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:55.834 15:43:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:55.834 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:55.834 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:55.834 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:55.834 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:55.834 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:55.834 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:55.834 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:55.834 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:55.835 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:33:55.835 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:55.835 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:33:55.835 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:33:55.835 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms
00:33:55.835 
00:33:55.835 --- 10.0.0.2 ping statistics ---
00:33:55.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:55.835 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms
00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:33:55.835 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:33:55.835 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms
00:33:55.835 
00:33:55.835 --- 10.0.0.1 ping statistics ---
00:33:55.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:55.835 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms
00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0
00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2
00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable
00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=849831
00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 849831
00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2
00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 849831 ']'
00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:55.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:55.835 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:55.835 [2024-11-20 15:43:43.924083] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:55.835 [2024-11-20 15:43:43.925193] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:33:55.835 [2024-11-20 15:43:43.925243] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:55.835 [2024-11-20 15:43:44.029419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:55.835 [2024-11-20 15:43:44.080226] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:55.835 [2024-11-20 15:43:44.080278] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:55.835 [2024-11-20 15:43:44.080287] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:55.836 [2024-11-20 15:43:44.080294] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:55.836 [2024-11-20 15:43:44.080300] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:55.836 [2024-11-20 15:43:44.081025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:55.836 [2024-11-20 15:43:44.159322] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:55.836 [2024-11-20 15:43:44.159611] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
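The trace above is the test-bed wiring nvmf/common.sh builds for a phy-mode TCP run: the first E810 port (cvl_0_0) is moved into a private network namespace for the target side, the second port (cvl_0_1) stays in the root namespace as the initiator, and a tagged iptables rule opens the NVMe/TCP port. Condensed, the traced sequence amounts to the following sketch (interface names, addresses and flags exactly as in this log; $SPDK_DIR stands in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout, and the comments are annotation, not part of the trace):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  # the 'SPDK_NVMF:' comment tags the rule so teardown can strip exactly this rule
  # later via iptables-save | grep -v SPDK_NVMF | iptables-restore (see nvmftestfini below)
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # target runs inside the namespace: shm id 0, tracepoints 0xFFFF, interrupt mode, core mask 0x2
  ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2

A ping in each direction (0.638 ms and 0.263 ms above) confirms the two ports reach each other before the target is launched.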
00:33:55.836 15:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:55.836 15:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:33:55.836 15:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:55.836 15:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:55.836 15:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:55.836 15:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:55.836 15:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:55.836 15:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.836 15:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:55.836 [2024-11-20 15:43:44.789879] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:56.097 15:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.097 15:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:56.097 15:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.097 15:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:56.097 Malloc0 00:33:56.097 15:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.097 15:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:56.097 15:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.097 15:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:56.097 15:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.097 15:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:56.097 15:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.097 15:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:56.097 15:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.097 15:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:56.097 15:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:33:56.097 15:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:56.097 [2024-11-20 15:43:44.870033] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:56.097 15:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.097 15:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=850092 00:33:56.097 15:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:56.097 15:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:33:56.097 15:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 850092 /var/tmp/bdevperf.sock 00:33:56.097 15:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 850092 ']' 00:33:56.097 15:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:56.097 15:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:56.097 15:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:56.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:56.097 15:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:56.097 15:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:56.097 [2024-11-20 15:43:44.944752] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
00:33:56.097 [2024-11-20 15:43:44.944827] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid850092 ]
00:33:56.097 [2024-11-20 15:43:45.037253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:56.357 [2024-11-20 15:43:45.089937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:56.930 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:56.930 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0
00:33:56.930 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:33:56.931 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:56.931 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:33:56.931 NVMe0n1
00:33:56.931 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:56.931 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:33:57.191 Running I/O for 10 seconds...
00:33:59.076 8202.00 IOPS, 32.04 MiB/s
[2024-11-20T14:43:48.979Z] 8687.00 IOPS, 33.93 MiB/s
[2024-11-20T14:43:50.365Z] 9217.33 IOPS, 36.01 MiB/s
[2024-11-20T14:43:51.306Z] 10239.25 IOPS, 40.00 MiB/s
[2024-11-20T14:43:52.248Z] 10868.60 IOPS, 42.46 MiB/s
[2024-11-20T14:43:53.189Z] 11325.83 IOPS, 44.24 MiB/s
[2024-11-20T14:43:54.129Z] 11671.57 IOPS, 45.59 MiB/s
[2024-11-20T14:43:55.068Z] 11897.75 IOPS, 46.48 MiB/s
[2024-11-20T14:43:56.010Z] 12058.67 IOPS, 47.10 MiB/s
[2024-11-20T14:43:56.270Z] 12235.20 IOPS, 47.79 MiB/s
00:34:07.310 Latency(us)
00:34:07.310 [2024-11-20T14:43:56.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:07.310 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:34:07.310 Verification LBA range: start 0x0 length 0x4000
00:34:07.310 NVMe0n1 : 10.06 12266.69 47.92 0.00 0.00 83158.32 17585.49 75584.85
00:34:07.310 [2024-11-20T14:43:56.270Z] ===================================================================================================================
00:34:07.310 [2024-11-20T14:43:56.270Z] Total : 12266.69 47.92 0.00 0.00 83158.32 17585.49 75584.85
00:34:07.310 {
00:34:07.310 "results": [
00:34:07.310 {
00:34:07.310 "job": "NVMe0n1",
00:34:07.310 "core_mask": "0x1",
00:34:07.310 "workload": "verify",
00:34:07.310 "status": "finished",
00:34:07.310 "verify_range": {
00:34:07.310 "start": 0,
00:34:07.310 "length": 16384
00:34:07.310 },
00:34:07.310 "queue_depth": 1024,
00:34:07.310 "io_size": 4096,
00:34:07.310 "runtime": 10.055528,
00:34:07.310 "iops": 12266.685548486364,
00:34:07.310 "mibps": 47.91674042377486,
00:34:07.310 "io_failed": 0,
00:34:07.310 "io_timeout": 0,
00:34:07.310 "avg_latency_us": 83158.32239971463,
00:34:07.310 "min_latency_us": 17585.493333333332,
00:34:07.310 "max_latency_us": 75584.85333333333
00:34:07.310 }
00:34:07.310 ],
00:34:07.310 "core_count": 1
00:34:07.310 }
00:34:07.310 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 850092
00:34:07.310 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 850092 ']'
00:34:07.310 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 850092
00:34:07.310 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
00:34:07.310 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:07.310 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 850092
00:34:07.310 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:34:07.310 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:34:07.310 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 850092'
00:34:07.310 killing process with pid 850092
00:34:07.310 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 850092
00:34:07.310 Received shutdown signal, test time was about 10.000000 seconds
00:34:07.310 
00:34:07.310 Latency(us)
00:34:07.310 [2024-11-20T14:43:56.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:07.310 [2024-11-20T14:43:56.270Z] ===================================================================================================================
00:34:07.310 [2024-11-20T14:43:56.270Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:07.310 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 850092
00:34:07.310 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:34:07.310 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:34:07.310 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup
00:34:07.310 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync
00:34:07.310 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:07.310 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e
00:34:07.310 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:07.310 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:34:07.310 rmmod nvme_tcp
00:34:07.310 rmmod nvme_fabrics
00:34:07.571 rmmod nvme_keyring
00:34:07.571 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:07.571 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e
00:34:07.571 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0
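The queue-depth run is easy to audit from the JSON above: 12266.685548 IOPS at an io_size of 4096 bytes is 12266.685548 x 4096 / 2^20 ≈ 47.92 MiB/s, which matches the reported "mibps", with zero failed and zero timed-out I/O at queue depth 1024. The target-side objects it exercised were created by the rpc_cmd calls traced earlier; the same configuration can be reproduced against a standalone target with the rpc.py script that rpc_cmd wraps in these tests (a sketch, assuming the default /var/tmp/spdk.sock RPC socket shown in this log; arguments exactly as traced):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB backing bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf then attached over the wire with bdev_nvme_attach_controller against 10.0.0.2:4420 and ran the 10-second verify workload whose per-second IOPS ramp appears above.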
15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 849831 ']' 00:34:07.571 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 849831 00:34:07.571 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 849831 ']' 00:34:07.571 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 849831 00:34:07.571 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:34:07.571 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:07.571 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 849831 00:34:07.571 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:07.571 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:07.571 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 849831' 00:34:07.571 killing process with pid 849831 00:34:07.571 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 849831 00:34:07.571 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 849831 00:34:07.571 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:07.571 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:07.571 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:07.571 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:34:07.571 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:34:07.571 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:07.571 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:34:07.571 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:07.571 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:07.571 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:07.571 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:07.571 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:10.119 00:34:10.119 real 0m22.428s 00:34:10.119 user 0m24.543s 00:34:10.119 sys 0m7.506s 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:10.119 ************************************ 00:34:10.119 END TEST nvmf_queue_depth 00:34:10.119 ************************************ 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:10.119 ************************************ 00:34:10.119 START TEST nvmf_target_multipath 00:34:10.119 ************************************ 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:34:10.119 * Looking for test storage... 00:34:10.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:34:10.119 15:43:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:10.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:10.119 --rc genhtml_branch_coverage=1 00:34:10.119 --rc genhtml_function_coverage=1 00:34:10.119 --rc genhtml_legend=1 00:34:10.119 --rc geninfo_all_blocks=1 00:34:10.119 --rc geninfo_unexecuted_blocks=1 00:34:10.119 00:34:10.119 ' 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:10.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:10.119 --rc genhtml_branch_coverage=1 00:34:10.119 --rc genhtml_function_coverage=1 00:34:10.119 --rc genhtml_legend=1 00:34:10.119 --rc geninfo_all_blocks=1 00:34:10.119 --rc geninfo_unexecuted_blocks=1 00:34:10.119 00:34:10.119 ' 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:10.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:10.119 --rc genhtml_branch_coverage=1 00:34:10.119 --rc genhtml_function_coverage=1 00:34:10.119 --rc genhtml_legend=1 00:34:10.119 --rc geninfo_all_blocks=1 00:34:10.119 --rc 
geninfo_unexecuted_blocks=1 00:34:10.119 00:34:10.119 ' 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:10.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:10.119 --rc genhtml_branch_coverage=1 00:34:10.119 --rc genhtml_function_coverage=1 00:34:10.119 --rc genhtml_legend=1 00:34:10.119 --rc geninfo_all_blocks=1 00:34:10.119 --rc geninfo_unexecuted_blocks=1 00:34:10.119 00:34:10.119 ' 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:10.119 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.120 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.120 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.120 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:34:10.120 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.120 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:34:10.120 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:10.120 15:43:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:10.120 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:10.120 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:10.120 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:10.120 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:10.120 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:10.120 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:10.120 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:10.120 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:10.120 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:10.120 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:10.120 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:10.120 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:10.120 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:34:10.120 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:10.120 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:10.120 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:10.120 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:10.120 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:10.120 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:10.120 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:10.120 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:10.120 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:10.120 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:10.120 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:34:10.120 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
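
[Note] The build_nvmf_app_args calls above assemble the argument vector that every nvmf_tgt launch in this run reuses. Only the branch outcomes are visible in the xtrace ('[' 0 -eq 1 ']' skipped, '[' 1 -eq 1 ']' taken), so the sketch below uses placeholder guard names; the fixed pieces (-i "$NVMF_APP_SHM_ID", -e 0xFFFF, --interrupt-mode) are taken verbatim from the log.

    # Hedged reconstruction; the guard variable names are placeholders.
    build_nvmf_app_args() {
        if [ "${RUN_AS_NON_ROOT:-0}" -eq 1 ]; then    # trace: '[' 0 -eq 1 ']' -> skipped
            NVMF_APP=(sudo -E "${NVMF_APP[@]}")
        fi
        NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id + full tracepoint mask
        NVMF_APP+=("${NO_HUGE[@]}")                   # empty in this run, so no-op
        if [ "${INTERRUPT_MODE:-0}" -eq 1 ]; then     # trace: '[' 1 -eq 1 ']' -> taken
            NVMF_APP+=(--interrupt-mode)              # matches the nvmf_tgt cmdline later
        fi
    }

Two further branches ('[' -n '' ']' and a final '[' 0 -eq 1 ']') are skipped in this run and are omitted from the sketch.
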
00:34:18.264 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:18.264 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:34:18.264 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:18.264 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:18.264 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:18.264 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:18.264 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:18.264 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:34:18.264 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:18.264 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:34:18.264 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:34:18.264 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:34:18.264 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:34:18.264 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:34:18.264 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:34:18.264 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:18.264 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:18.264 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:18.264 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:18.264 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:18.264 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:18.264 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:18.264 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:18.264 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:18.264 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:18.264 15:44:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:18.264 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:18.264 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:18.264 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:18.264 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:18.264 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:18.264 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:18.264 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:18.264 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:18.264 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:18.264 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:18.264 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:18.265 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:18.265 15:44:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:18.265 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:18.265 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:18.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:18.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.597 ms 00:34:18.265 00:34:18.265 --- 10.0.0.2 ping statistics --- 00:34:18.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:18.265 rtt min/avg/max/mdev = 0.597/0.597/0.597/0.000 ms 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:18.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:18.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:34:18.265 00:34:18.265 --- 10.0.0.1 ping statistics --- 00:34:18.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:18.265 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:18.265 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:34:18.266 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:34:18.266 only one NIC for nvmf test 00:34:18.266 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:34:18.266 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:18.266 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:34:18.266 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:18.266 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:34:18.266 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:18.266 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:18.266 rmmod nvme_tcp 00:34:18.266 rmmod nvme_fabrics 00:34:18.266 rmmod nvme_keyring 00:34:18.266 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:18.266 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:34:18.266 15:44:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:34:18.266 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:34:18.266 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:18.266 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:18.266 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:18.266 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:34:18.266 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:34:18.266 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:18.266 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:34:18.266 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:18.266 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:18.266 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:18.266 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:18.266 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:19.721 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:19.721 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:34:19.721 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:34:19.721 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:19.721 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:34:19.721 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:19.721 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:34:19.721 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:19.722 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:19.722 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:19.722 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:34:19.722 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:34:19.722 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:34:19.722 15:44:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:19.722 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:19.722 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:19.722 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:34:19.722 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:34:19.722 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:19.722 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:34:19.722 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:19.722 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:19.722 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:19.722 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:19.722 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:19.722 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:19.722 00:34:19.722 real 0m10.006s 00:34:19.722 user 0m2.201s 00:34:19.722 sys 0m5.755s 00:34:19.722 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:19.722 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:34:19.722 ************************************ 00:34:19.722 END TEST nvmf_target_multipath 00:34:19.722 ************************************ 00:34:19.984 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:34:19.984 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:19.984 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:19.984 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:19.984 ************************************ 00:34:19.984 START TEST nvmf_zcopy 00:34:19.984 ************************************ 00:34:19.984 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:34:19.984 * Looking for test storage... 
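
[Note] Between the END TEST banner above and the test-storage probe below, autotest_common.sh's run_test hands control to zcopy.sh. Judging only from the banners and the real/user/sys triple in the trace, the harness behaves roughly like this simplified sketch; the real helper also manages xtrace state and exit-code propagation, which is omitted here.

    run_test() {                          # simplified; not the verbatim helper
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                         # produces the real/user/sys lines seen above
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    run_test nvmf_zcopy ./test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode
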
00:34:19.984 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:19.984 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:19.984 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:34:19.984 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:19.984 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:19.984 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:19.984 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:19.984 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:19.984 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:34:19.984 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:34:19.984 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:34:19.984 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:34:19.984 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:34:19.984 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:34:19.984 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:34:19.984 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:19.984 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:34:19.984 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:34:19.984 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:19.984 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:19.984 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:34:19.984 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:34:19.984 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:19.984 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:34:19.984 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:34:19.984 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:34:19.984 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:34:19.984 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:19.984 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:34:19.984 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:34:19.984 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:20.245 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:20.245 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:34:20.245 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:20.245 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:20.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:20.245 --rc genhtml_branch_coverage=1 00:34:20.245 --rc genhtml_function_coverage=1 00:34:20.245 --rc genhtml_legend=1 00:34:20.245 --rc geninfo_all_blocks=1 00:34:20.245 --rc geninfo_unexecuted_blocks=1 00:34:20.245 00:34:20.245 ' 00:34:20.245 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:20.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:20.245 --rc genhtml_branch_coverage=1 00:34:20.245 --rc genhtml_function_coverage=1 00:34:20.245 --rc genhtml_legend=1 00:34:20.245 --rc geninfo_all_blocks=1 00:34:20.245 --rc geninfo_unexecuted_blocks=1 00:34:20.245 00:34:20.245 ' 00:34:20.245 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:20.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:20.245 --rc genhtml_branch_coverage=1 00:34:20.245 --rc genhtml_function_coverage=1 00:34:20.245 --rc genhtml_legend=1 00:34:20.245 --rc geninfo_all_blocks=1 00:34:20.245 --rc geninfo_unexecuted_blocks=1 00:34:20.245 00:34:20.245 ' 00:34:20.245 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:20.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:20.246 --rc genhtml_branch_coverage=1 00:34:20.246 --rc genhtml_function_coverage=1 00:34:20.246 --rc genhtml_legend=1 00:34:20.246 --rc geninfo_all_blocks=1 00:34:20.246 --rc geninfo_unexecuted_blocks=1 00:34:20.246 00:34:20.246 ' 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:20.246 15:44:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:34:20.246 15:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:28.388 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:28.388 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:34:28.388 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:28.388 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:28.388 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:28.388 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:28.388 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:28.388 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:34:28.388 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:28.388 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:34:28.388 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:34:28.388 15:44:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:34:28.388 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:34:28.388 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:34:28.388 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:34:28.388 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:28.388 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:28.389 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:28.389 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:28.389 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:28.389 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:28.389 15:44:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:28.389 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:28.389 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:34:28.389 00:34:28.389 --- 10.0.0.2 ping statistics --- 00:34:28.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:28.389 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:28.389 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:28.389 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:34:28.389 00:34:28.389 --- 10.0.0.1 ping statistics --- 00:34:28.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:28.389 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:34:28.389 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:28.390 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:28.390 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:28.390 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=860448 00:34:28.390 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 860448 00:34:28.390 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:34:28.390 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 860448 ']' 00:34:28.390 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:28.390 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:28.390 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:28.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:28.390 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:28.390 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:28.390 [2024-11-20 15:44:16.522787] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:28.390 [2024-11-20 15:44:16.523930] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:34:28.390 [2024-11-20 15:44:16.523983] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:28.390 [2024-11-20 15:44:16.624408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:28.390 [2024-11-20 15:44:16.675042] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:28.390 [2024-11-20 15:44:16.675091] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:28.390 [2024-11-20 15:44:16.675099] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:28.390 [2024-11-20 15:44:16.675106] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:28.390 [2024-11-20 15:44:16.675113] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:28.390 [2024-11-20 15:44:16.675824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:28.390 [2024-11-20 15:44:16.752465] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:28.390 [2024-11-20 15:44:16.752759] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
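The trace above is the single-host NVMe/TCP topology nvmf/common.sh builds for the phy run: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, its peer (cvl_0_1) stays in the root namespace as 10.0.0.1, an iptables rule admits port 4420, and nvmf_tgt is started inside the namespace in interrupt mode while waitforlisten polls for its RPC socket. A minimal sketch of the same sequence, using hypothetical IF_TGT/IF_INI variables in place of the cvl_* names and scripts/rpc.py as a crude stand-in for the waitforlisten helper:

  ip netns add spdk_tgt_ns                         # target-side namespace (cvl_0_0_ns_spdk above)
  ip link set "$IF_TGT" netns spdk_tgt_ns          # cvl_0_0 in the log
  ip addr add 10.0.0.1/24 dev "$IF_INI"            # initiator port stays in the root namespace
  ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev "$IF_TGT"
  ip link set "$IF_INI" up
  ip netns exec spdk_tgt_ns ip link set "$IF_TGT" up
  ip netns exec spdk_tgt_ns ip link set lo up
  iptables -I INPUT 1 -i "$IF_INI" -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ip netns exec spdk_tgt_ns ./build/bin/nvmf_tgt --interrupt-mode -m 0x2 &
  tgt_pid=$!
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do       # wait for the RPC socket
      kill -0 "$tgt_pid" || exit 1                                 # bail out if the target died
      sleep 0.5
  done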
00:34:28.390 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:28.390 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:34:28.390 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:28.390 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:28.390 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:28.651 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:28.651 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:34:28.651 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:34:28.651 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.651 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:28.651 [2024-11-20 15:44:17.384684] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:28.651 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.651 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:34:28.651 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.651 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:28.651 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.651 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:28.651 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.651 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:28.651 [2024-11-20 15:44:17.413013] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:28.651 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.651 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:28.651 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.651 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:28.651 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.651 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:34:28.651 15:44:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.651 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:28.651 malloc0 00:34:28.651 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.651 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:34:28.651 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.651 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:28.651 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.651 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:34:28.651 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:34:28.651 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:34:28.651 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:34:28.651 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:28.651 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:28.651 { 00:34:28.651 "params": { 00:34:28.651 "name": "Nvme$subsystem", 00:34:28.651 "trtype": "$TEST_TRANSPORT", 00:34:28.651 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:28.651 "adrfam": "ipv4", 00:34:28.651 "trsvcid": "$NVMF_PORT", 00:34:28.651 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:28.651 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:28.651 "hdgst": ${hdgst:-false}, 00:34:28.651 "ddgst": ${ddgst:-false} 00:34:28.651 }, 00:34:28.651 "method": "bdev_nvme_attach_controller" 00:34:28.651 } 00:34:28.651 EOF 00:34:28.651 )") 00:34:28.651 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:34:28.651 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:34:28.651 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:34:28.651 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:28.651 "params": { 00:34:28.651 "name": "Nvme1", 00:34:28.651 "trtype": "tcp", 00:34:28.651 "traddr": "10.0.0.2", 00:34:28.651 "adrfam": "ipv4", 00:34:28.651 "trsvcid": "4420", 00:34:28.651 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:28.651 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:28.651 "hdgst": false, 00:34:28.651 "ddgst": false 00:34:28.651 }, 00:34:28.651 "method": "bdev_nvme_attach_controller" 00:34:28.651 }' 00:34:28.651 [2024-11-20 15:44:17.516800] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
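What the xtrace above amounts to: the target is provisioned over RPC (a zero-copy-enabled TCP transport, subsystem nqn.2016-06.io.spdk:cnode1, a listener on 10.0.0.2:4420, and a 32 MiB malloc bdev with 4 KiB blocks exported as namespace 1), and bdevperf is handed a bdev_nvme_attach_controller description on an anonymous file descriptor (/dev/fd/62), which is what gen_nvmf_target_json's heredoc produces. A condensed sketch of the same flow, with the generated JSON inlined as an approximation of the helper's output (paths assume the SPDK repo root):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # process substitution is where the /dev/fd/62 in the log comes from
  cfg='{"subsystems":[{"subsystem":"bdev","config":[{"method":"bdev_nvme_attach_controller",
    "params":{"name":"Nvme1","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420",
    "subnqn":"nqn.2016-06.io.spdk:cnode1","hostnqn":"nqn.2016-06.io.spdk:host1",
    "hdgst":false,"ddgst":false}}]}]}'
  ./build/examples/bdevperf --json <(printf '%s\n' "$cfg") -t 10 -q 128 -w verify -o 8192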
00:34:28.652 [2024-11-20 15:44:17.516865] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid860779 ]
00:34:28.912 [2024-11-20 15:44:17.610193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:28.912 [2024-11-20 15:44:17.662584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:34:28.912 Running I/O for 10 seconds...
00:34:31.240 6390.00 IOPS, 49.92 MiB/s
[2024-11-20T14:44:21.143Z] 6452.50 IOPS, 50.41 MiB/s
[2024-11-20T14:44:22.085Z] 6469.00 IOPS, 50.54 MiB/s
[2024-11-20T14:44:23.026Z] 6755.00 IOPS, 52.77 MiB/s
[2024-11-20T14:44:23.967Z] 7340.20 IOPS, 57.35 MiB/s
[2024-11-20T14:44:24.908Z] 7726.83 IOPS, 60.37 MiB/s
[2024-11-20T14:44:26.292Z] 8001.86 IOPS, 62.51 MiB/s
[2024-11-20T14:44:27.233Z] 8209.00 IOPS, 64.13 MiB/s
[2024-11-20T14:44:28.176Z] 8369.89 IOPS, 65.39 MiB/s
[2024-11-20T14:44:28.176Z] 8500.30 IOPS, 66.41 MiB/s
00:34:39.216                                  Latency(us)
00:34:39.216 [2024-11-20T14:44:28.176Z] Device Information : runtime(s)  IOPS     MiB/s  Fail/s  TO/s  Average   min     max
00:34:39.216 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:34:39.216 Verification LBA range: start 0x0 length 0x1000
00:34:39.216 Nvme1n1 : 10.01 8504.29 66.44 0.00 0.00 15005.38 778.24 27197.44
00:34:39.216 [2024-11-20T14:44:28.176Z] ===================================================================================================================
00:34:39.216 [2024-11-20T14:44:28.176Z] Total : 8504.29 66.44 0.00 0.00 15005.38 778.24 27197.44
00:34:39.216 15:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=862774
00:34:39.216 15:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:34:39.216 15:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:39.216 15:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:34:39.216 15:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:34:39.216 15:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:34:39.216 15:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:34:39.216 15:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:34:39.216 15:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:34:39.216 {
00:34:39.216 "params": {
00:34:39.216 "name": "Nvme$subsystem",
00:34:39.216 "trtype": "$TEST_TRANSPORT",
00:34:39.216 "traddr": "$NVMF_FIRST_TARGET_IP",
00:34:39.216 "adrfam": "ipv4",
00:34:39.216 "trsvcid": "$NVMF_PORT",
00:34:39.216 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:34:39.216 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:34:39.216 "hdgst": ${hdgst:-false},
00:34:39.216 "ddgst": ${ddgst:-false}
00:34:39.216 },
00:34:39.216 "method": "bdev_nvme_attach_controller"
00:34:39.216 }
00:34:39.216 EOF
00:34:39.216 )")
00:34:39.216 15:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:34:39.216
[2024-11-20 15:44:28.008227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.216 [2024-11-20 15:44:28.008255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.216 15:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:34:39.216 15:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:34:39.216 15:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:39.216 "params": { 00:34:39.216 "name": "Nvme1", 00:34:39.216 "trtype": "tcp", 00:34:39.216 "traddr": "10.0.0.2", 00:34:39.216 "adrfam": "ipv4", 00:34:39.216 "trsvcid": "4420", 00:34:39.216 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:39.216 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:39.216 "hdgst": false, 00:34:39.216 "ddgst": false 00:34:39.216 }, 00:34:39.216 "method": "bdev_nvme_attach_controller" 00:34:39.216 }' 00:34:39.216 [2024-11-20 15:44:28.020197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.216 [2024-11-20 15:44:28.020205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.216 [2024-11-20 15:44:28.032194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.216 [2024-11-20 15:44:28.032201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.216 [2024-11-20 15:44:28.044194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.216 [2024-11-20 15:44:28.044201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.216 [2024-11-20 15:44:28.048958] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
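From this point on the log settles into a single repeating pair of records. The second bdevperf instance (randrw, 50/50 mix, 5 seconds, EAL banner just above) runs I/O while the test keeps re-issuing nvmf_subsystem_add_ns for a namespace ID that malloc0 already occupies, so every attempt is rejected with the subsystem.c/nvmf_rpc.c pair; the effect is to churn the target's namespace pause/resume path while zcopy I/O is in flight. A speculative reconstruction of the driving loop, reusing the $cfg JSON from the earlier sketch (the real loop lives in test/nvmf/target/zcopy.sh and is not itself visible in this log):

  ./build/examples/bdevperf --json <(printf '%s\n' "$cfg") -t 5 -q 128 -w randrw -M 50 -o 8192 &
  perfpid=$!
  while kill -0 "$perfpid" 2>/dev/null; do
      # NSID 1 is already occupied, so this fails by design with
      # "Requested NSID 1 already in use" / "Unable to add namespace"
      ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done
  wait "$perfpid"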
00:34:39.216 [2024-11-20 15:44:28.049005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid862774 ] 00:34:39.216 [2024-11-20 15:44:28.056193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.216 [2024-11-20 15:44:28.056200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.216 [2024-11-20 15:44:28.068193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.216 [2024-11-20 15:44:28.068200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.216 [2024-11-20 15:44:28.080194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.216 [2024-11-20 15:44:28.080201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.216 [2024-11-20 15:44:28.092195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.216 [2024-11-20 15:44:28.092202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.216 [2024-11-20 15:44:28.104192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.216 [2024-11-20 15:44:28.104200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.216 [2024-11-20 15:44:28.116192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.216 [2024-11-20 15:44:28.116199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.216 [2024-11-20 15:44:28.128194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.216 [2024-11-20 15:44:28.128201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.216 [2024-11-20 15:44:28.131190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:39.216 [2024-11-20 15:44:28.140194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.216 [2024-11-20 15:44:28.140202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.216 [2024-11-20 15:44:28.152194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.216 [2024-11-20 15:44:28.152203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.216 [2024-11-20 15:44:28.160587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:39.216 [2024-11-20 15:44:28.164194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.216 [2024-11-20 15:44:28.164202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.478 [2024-11-20 15:44:28.176200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.478 [2024-11-20 15:44:28.176213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.478 [2024-11-20 15:44:28.188197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.478 [2024-11-20 15:44:28.188210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.478 [2024-11-20 15:44:28.200196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:34:39.478 [2024-11-20 15:44:28.200206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.479 [2024-11-20 15:44:28.212196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.479 [2024-11-20 15:44:28.212207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.479 [2024-11-20 15:44:28.224199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.479 [2024-11-20 15:44:28.224210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.479 [2024-11-20 15:44:28.236198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.479 [2024-11-20 15:44:28.236210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.479 [2024-11-20 15:44:28.248196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.479 [2024-11-20 15:44:28.248206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.479 [2024-11-20 15:44:28.260194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.479 [2024-11-20 15:44:28.260203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.479 [2024-11-20 15:44:28.272193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.479 [2024-11-20 15:44:28.272201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.479 [2024-11-20 15:44:28.284193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.479 [2024-11-20 15:44:28.284200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.479 [2024-11-20 15:44:28.296194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.479 [2024-11-20 15:44:28.296201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.479 [2024-11-20 15:44:28.308195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.479 [2024-11-20 15:44:28.308204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.479 [2024-11-20 15:44:28.320194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.479 [2024-11-20 15:44:28.320201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.479 [2024-11-20 15:44:28.332193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.479 [2024-11-20 15:44:28.332199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.479 [2024-11-20 15:44:28.344192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.479 [2024-11-20 15:44:28.344199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.479 [2024-11-20 15:44:28.356193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.479 [2024-11-20 15:44:28.356202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.479 [2024-11-20 15:44:28.368193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.479 [2024-11-20 15:44:28.368200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.479 [2024-11-20 
15:44:28.380192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.479 [2024-11-20 15:44:28.380199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.479 [2024-11-20 15:44:28.392191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.479 [2024-11-20 15:44:28.392199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.479 [2024-11-20 15:44:28.404193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.479 [2024-11-20 15:44:28.404201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.479 [2024-11-20 15:44:28.416193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.479 [2024-11-20 15:44:28.416199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.479 [2024-11-20 15:44:28.428193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.479 [2024-11-20 15:44:28.428199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.740 [2024-11-20 15:44:28.440193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.740 [2024-11-20 15:44:28.440201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.740 [2024-11-20 15:44:28.452199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.740 [2024-11-20 15:44:28.452213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.740 Running I/O for 5 seconds... 00:34:39.740 [2024-11-20 15:44:28.467845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.740 [2024-11-20 15:44:28.467861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.740 [2024-11-20 15:44:28.481357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.740 [2024-11-20 15:44:28.481373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.740 [2024-11-20 15:44:28.495396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.740 [2024-11-20 15:44:28.495411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.740 [2024-11-20 15:44:28.508192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.740 [2024-11-20 15:44:28.508207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.740 [2024-11-20 15:44:28.521320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.740 [2024-11-20 15:44:28.521334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.740 [2024-11-20 15:44:28.535527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.740 [2024-11-20 15:44:28.535543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.740 [2024-11-20 15:44:28.548698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.740 [2024-11-20 15:44:28.548712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.740 [2024-11-20 15:44:28.562764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
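Nothing in these records changes except the timestamps, which advance a few milliseconds per attempt, so when triaging a run like this it is quicker to count the pair than to read it. A small sketch, assuming the console output was saved to a hypothetical build.log:

  grep -c 'Requested NSID 1 already in use' build.log            # number of rejected add_ns attempts
  grep -n 'Unable to add namespace' build.log | sed -n '1p;$p'   # first and last occurrence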
00:34:39.740 [2024-11-20 15:44:28.562779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.740 [2024-11-20 15:44:28.575648] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.740 [2024-11-20 15:44:28.575663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.740 [2024-11-20 15:44:28.588318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.740 [2024-11-20 15:44:28.588333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.740 [2024-11-20 15:44:28.601222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.740 [2024-11-20 15:44:28.601237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.740 [2024-11-20 15:44:28.615583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.740 [2024-11-20 15:44:28.615598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.740 [2024-11-20 15:44:28.628745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.740 [2024-11-20 15:44:28.628759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.740 [2024-11-20 15:44:28.643266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.740 [2024-11-20 15:44:28.643281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.740 [2024-11-20 15:44:28.656752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.740 [2024-11-20 15:44:28.656767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.740 [2024-11-20 15:44:28.671682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.740 [2024-11-20 15:44:28.671698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:39.740 [2024-11-20 15:44:28.684545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:39.740 [2024-11-20 15:44:28.684560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.001 [2024-11-20 15:44:28.699488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.001 [2024-11-20 15:44:28.699505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.001 [2024-11-20 15:44:28.712724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.001 [2024-11-20 15:44:28.712738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.001 [2024-11-20 15:44:28.727284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.001 [2024-11-20 15:44:28.727298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.002 [2024-11-20 15:44:28.740582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.002 [2024-11-20 15:44:28.740596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.002 [2024-11-20 15:44:28.755453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.002 [2024-11-20 15:44:28.755468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.002 [2024-11-20 15:44:28.768154] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.002 [2024-11-20 15:44:28.768172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.002 [2024-11-20 15:44:28.780786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.002 [2024-11-20 15:44:28.780800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.002 [2024-11-20 15:44:28.795204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.002 [2024-11-20 15:44:28.795218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.002 [2024-11-20 15:44:28.808333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.002 [2024-11-20 15:44:28.808347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.002 [2024-11-20 15:44:28.821137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.002 [2024-11-20 15:44:28.821152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.002 [2024-11-20 15:44:28.835200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.002 [2024-11-20 15:44:28.835218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.002 [2024-11-20 15:44:28.848338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.002 [2024-11-20 15:44:28.848353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.002 [2024-11-20 15:44:28.861120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.002 [2024-11-20 15:44:28.861134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.002 [2024-11-20 15:44:28.875329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.002 [2024-11-20 15:44:28.875343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.002 [2024-11-20 15:44:28.888499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.002 [2024-11-20 15:44:28.888512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.002 [2024-11-20 15:44:28.903352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.002 [2024-11-20 15:44:28.903367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.002 [2024-11-20 15:44:28.916434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.002 [2024-11-20 15:44:28.916449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.002 [2024-11-20 15:44:28.929137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.002 [2024-11-20 15:44:28.929151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.002 [2024-11-20 15:44:28.943467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.002 [2024-11-20 15:44:28.943482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.002 [2024-11-20 15:44:28.956741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.002 [2024-11-20 15:44:28.956755] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.263 [2024-11-20 15:44:28.971435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.263 [2024-11-20 15:44:28.971451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.263 [2024-11-20 15:44:28.984753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.263 [2024-11-20 15:44:28.984768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.263 [2024-11-20 15:44:28.998918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.263 [2024-11-20 15:44:28.998933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.263 [2024-11-20 15:44:29.011719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.263 [2024-11-20 15:44:29.011735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.263 [2024-11-20 15:44:29.024244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.263 [2024-11-20 15:44:29.024259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.263 [2024-11-20 15:44:29.037205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.263 [2024-11-20 15:44:29.037219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.263 [2024-11-20 15:44:29.051134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.263 [2024-11-20 15:44:29.051149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.263 [2024-11-20 15:44:29.064322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.263 [2024-11-20 15:44:29.064337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.263 [2024-11-20 15:44:29.077475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.263 [2024-11-20 15:44:29.077490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.263 [2024-11-20 15:44:29.091524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.263 [2024-11-20 15:44:29.091543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.263 [2024-11-20 15:44:29.104637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.263 [2024-11-20 15:44:29.104652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.263 [2024-11-20 15:44:29.119193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.263 [2024-11-20 15:44:29.119208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.263 [2024-11-20 15:44:29.132317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.263 [2024-11-20 15:44:29.132332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.263 [2024-11-20 15:44:29.144947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.263 [2024-11-20 15:44:29.144961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.263 [2024-11-20 15:44:29.159190] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.263 [2024-11-20 15:44:29.159205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.263 [2024-11-20 15:44:29.172127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.263 [2024-11-20 15:44:29.172142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.263 [2024-11-20 15:44:29.184835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.263 [2024-11-20 15:44:29.184850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.263 [2024-11-20 15:44:29.199282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.263 [2024-11-20 15:44:29.199297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.263 [2024-11-20 15:44:29.212146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.263 [2024-11-20 15:44:29.212165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.524 [2024-11-20 15:44:29.225489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.524 [2024-11-20 15:44:29.225506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.524 [2024-11-20 15:44:29.239283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.524 [2024-11-20 15:44:29.239298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.524 [2024-11-20 15:44:29.252739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.524 [2024-11-20 15:44:29.252753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.524 [2024-11-20 15:44:29.267363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.524 [2024-11-20 15:44:29.267377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.524 [2024-11-20 15:44:29.280485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.524 [2024-11-20 15:44:29.280499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.524 [2024-11-20 15:44:29.295337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.524 [2024-11-20 15:44:29.295352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.524 [2024-11-20 15:44:29.307997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.524 [2024-11-20 15:44:29.308012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.524 [2024-11-20 15:44:29.321286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.524 [2024-11-20 15:44:29.321301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.524 [2024-11-20 15:44:29.335431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.524 [2024-11-20 15:44:29.335446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.524 [2024-11-20 15:44:29.348165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.524 [2024-11-20 15:44:29.348186] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.524 [2024-11-20 15:44:29.361152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.524 [2024-11-20 15:44:29.361171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.524 [2024-11-20 15:44:29.375548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.524 [2024-11-20 15:44:29.375563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.524 [2024-11-20 15:44:29.388675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.524 [2024-11-20 15:44:29.388689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.524 [2024-11-20 15:44:29.403335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.524 [2024-11-20 15:44:29.403349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.524 [2024-11-20 15:44:29.416499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.524 [2024-11-20 15:44:29.416513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.524 [2024-11-20 15:44:29.431719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.524 [2024-11-20 15:44:29.431734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.524 [2024-11-20 15:44:29.444749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.524 [2024-11-20 15:44:29.444764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.524 [2024-11-20 15:44:29.459541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.524 [2024-11-20 15:44:29.459556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.524 18955.00 IOPS, 148.09 MiB/s [2024-11-20T14:44:29.484Z] [2024-11-20 15:44:29.472677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.524 [2024-11-20 15:44:29.472691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.785 [2024-11-20 15:44:29.487328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.785 [2024-11-20 15:44:29.487343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.785 [2024-11-20 15:44:29.500259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.785 [2024-11-20 15:44:29.500274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.785 [2024-11-20 15:44:29.512890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.785 [2024-11-20 15:44:29.512904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.785 [2024-11-20 15:44:29.527408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.785 [2024-11-20 15:44:29.527424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.785 [2024-11-20 15:44:29.540444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.785 [2024-11-20 15:44:29.540459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.785 [2024-11-20 
15:44:29.553196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.785 [2024-11-20 15:44:29.553211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.785 [2024-11-20 15:44:29.567566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.785 [2024-11-20 15:44:29.567581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.785 [2024-11-20 15:44:29.580792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.785 [2024-11-20 15:44:29.580806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.785 [2024-11-20 15:44:29.595444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.785 [2024-11-20 15:44:29.595459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.785 [2024-11-20 15:44:29.608306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.785 [2024-11-20 15:44:29.608321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.785 [2024-11-20 15:44:29.621214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.785 [2024-11-20 15:44:29.621229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.785 [2024-11-20 15:44:29.635510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.785 [2024-11-20 15:44:29.635525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.785 [2024-11-20 15:44:29.648531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.785 [2024-11-20 15:44:29.648545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.785 [2024-11-20 15:44:29.663220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.785 [2024-11-20 15:44:29.663235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.785 [2024-11-20 15:44:29.676314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.785 [2024-11-20 15:44:29.676329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.785 [2024-11-20 15:44:29.689077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.785 [2024-11-20 15:44:29.689091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.785 [2024-11-20 15:44:29.703070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.785 [2024-11-20 15:44:29.703084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.785 [2024-11-20 15:44:29.715950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.785 [2024-11-20 15:44:29.715966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.785 [2024-11-20 15:44:29.728644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.785 [2024-11-20 15:44:29.728658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.785 [2024-11-20 15:44:29.743256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.785 [2024-11-20 15:44:29.743272] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:41.047 [2024-11-20 15:44:29.756423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:41.047 [2024-11-20 15:44:29.756438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:41.047 [2024-11-20 15:44:29.769079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:41.047 [2024-11-20 15:44:29.769094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:41.047 [2024-11-20 15:44:29.783300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:41.047 [2024-11-20 15:44:29.783315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:41.047 [2024-11-20 15:44:29.796323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:41.047 [2024-11-20 15:44:29.796338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:41.047 [2024-11-20 15:44:29.809218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:41.047 [2024-11-20 15:44:29.809232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:41.047 [2024-11-20 15:44:29.823307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:41.047 [2024-11-20 15:44:29.823322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:41.047 [2024-11-20 15:44:29.836581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:41.047 [2024-11-20 15:44:29.836595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:41.047 [2024-11-20 15:44:29.851400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:41.047 [2024-11-20 15:44:29.851414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:41.047 [2024-11-20 15:44:29.864567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:41.047 [2024-11-20 15:44:29.864581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:41.047 [2024-11-20 15:44:29.878914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:41.047 [2024-11-20 15:44:29.878929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:41.047 [2024-11-20 15:44:29.891840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:41.047 [2024-11-20 15:44:29.891855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:41.047 [2024-11-20 15:44:29.904643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:41.047 [2024-11-20 15:44:29.904656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:41.047 [2024-11-20 15:44:29.919350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:41.047 [2024-11-20 15:44:29.919365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:41.047 [2024-11-20 15:44:29.932416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:41.047 [2024-11-20 15:44:29.932431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:41.047 [2024-11-20 15:44:29.945116] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:41.047 [2024-11-20 15:44:29.945130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
(previous two messages repeated at ~13 ms intervals, 15:44:29.959 through 15:44:33.464, only the timestamps differing, as the namespace add kept being retried against the in-use NSID 1; throughput samples from the concurrent I/O job:)
00:34:41.569 19079.50 IOPS, 149.06 MiB/s [2024-11-20T14:44:30.529Z]
00:34:42.616 19098.33 IOPS, 149.21 MiB/s [2024-11-20T14:44:31.576Z]
00:34:43.657 19115.25 IOPS, 149.34 MiB/s [2024-11-20T14:44:32.617Z]
00:34:44.702 19115.60 IOPS, 149.34 MiB/s [2024-11-20T14:44:33.662Z]
[2024-11-20 15:44:33.476863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:44.702 [2024-11-20 15:44:33.476877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:44.702
00:34:44.702 Latency(us)
00:34:44.702 [2024-11-20T14:44:33.662Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:44.702 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:34:44.702 Nvme1n1 : 5.01 19117.63 149.36 0.00 0.00 6689.58 2839.89 11905.71
00:34:44.702 [2024-11-20T14:44:33.662Z] ===================================================================================================================
00:34:44.702 [2024-11-20T14:44:33.662Z] Total : 19117.63 149.36 0.00 0.00 6689.58 2839.89 11905.71
(previous error pair repeated eight more times at ~12 ms intervals, 15:44:33.488 through 15:44:33.572, as the retry loop wound down)
00:34:44.702 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (862774) - No such process
00:34:44.702 15:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 862774
00:34:44.702 15:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:34:44.702 15:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:44.702 15:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:44.702 15:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
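The flood of paired errors above is the namespace hot-plug phase of the zcopy test: nvmf_subsystem_add_ns is retried against cnode1 while NSID 1 is still attached, each attempt failing in subsystem.c and being reported by the RPC layer. A minimal sketch of the collision and its resolution, assuming a running SPDK target on the default /var/tmp/spdk.sock socket and the stock scripts/rpc.py client; the NQN and the malloc0 bdev name are taken from the log, though re-adding malloc0 here is illustrative (the test itself re-adds a delay bdev in the next step):

# Minimal sketch, assuming a running SPDK nvmf target and scripts/rpc.py.
RPC=./scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

# While NSID 1 is attached, re-adding it fails exactly as in the log:
# subsystem.c: "Requested NSID 1 already in use".
$RPC nvmf_subsystem_add_ns "$NQN" malloc0 -n 1 \
    || echo "expected failure: NSID 1 already in use"

# The slot must be freed before the NSID can be reused
# (this is what target/zcopy.sh@52 does above).
$RPC nvmf_subsystem_remove_ns "$NQN" 1
$RPC nvmf_subsystem_add_ns "$NQN" malloc0 -n 1   # now succeeds

As a sanity check, the summary table above is self-consistent: 19117.63 IOPS at the job's 8192-byte I/O size is 19117.63 x 8192 / 2^20 ≈ 149.36 MiB/s, matching the reported throughput.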
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:34:44.702 15:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.702 15:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:44.702 delay0 00:34:44.702 15:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.702 15:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:34:44.702 15:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.702 15:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:44.702 15:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.702 15:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:34:44.964 [2024-11-20 15:44:33.744661] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:34:51.550 Initializing NVMe Controllers 00:34:51.550 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:51.550 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:51.550 Initialization complete. Launching workers. 
00:34:51.550 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 6845 00:34:51.550 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 7132, failed to submit 33 00:34:51.550 success 6985, unsuccessful 147, failed 0 00:34:51.550 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:34:51.550 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:34:51.550 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:51.550 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:34:51.550 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:51.550 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:34:51.550 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:51.550 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:51.550 rmmod nvme_tcp 00:34:51.550 rmmod nvme_fabrics 00:34:51.550 rmmod nvme_keyring 00:34:51.819 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:51.819 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:34:51.819 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:34:51.819 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 860448 ']' 00:34:51.819 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 860448 00:34:51.819 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 860448 ']' 00:34:51.819 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 860448 00:34:51.819 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:34:51.819 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:51.819 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 860448 00:34:51.819 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:51.820 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:51.820 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 860448' 00:34:51.820 killing process with pid 860448 00:34:51.820 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 860448 00:34:51.820 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 860448 00:34:51.820 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:51.820 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:51.820 15:44:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:51.820 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:34:51.820 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:34:51.820 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:51.820 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:34:51.820 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:51.820 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:51.820 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:51.820 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:51.820 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:54.365 15:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:54.365 00:34:54.365 real 0m34.040s 00:34:54.365 user 0m43.215s 00:34:54.365 sys 0m12.533s 00:34:54.365 15:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:54.365 15:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:54.365 ************************************ 00:34:54.365 END TEST nvmf_zcopy 00:34:54.365 ************************************ 00:34:54.365 15:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:54.365 15:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:54.365 15:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:54.365 15:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:54.365 ************************************ 00:34:54.365 START TEST nvmf_nmic 00:34:54.365 ************************************ 00:34:54.365 15:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:54.365 * Looking for test storage... 
00:34:54.365 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:54.365 15:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:54.365 15:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:34:54.365 15:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:54.365 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:54.365 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:54.365 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:54.365 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:54.365 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:34:54.365 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:34:54.365 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:34:54.365 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:34:54.365 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:34:54.365 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:34:54.365 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:34:54.365 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:54.365 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:34:54.365 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:34:54.365 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:54.365 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:54.365 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:34:54.365 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:34:54.365 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:54.365 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:34:54.365 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:34:54.365 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:34:54.365 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:34:54.365 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:54.365 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:34:54.365 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:34:54.365 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:54.365 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:54.365 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:34:54.365 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:54.365 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:54.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:54.365 --rc genhtml_branch_coverage=1 00:34:54.365 --rc genhtml_function_coverage=1 00:34:54.365 --rc genhtml_legend=1 00:34:54.365 --rc geninfo_all_blocks=1 00:34:54.365 --rc geninfo_unexecuted_blocks=1 00:34:54.365 00:34:54.365 ' 00:34:54.365 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:54.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:54.365 --rc genhtml_branch_coverage=1 00:34:54.365 --rc genhtml_function_coverage=1 00:34:54.365 --rc genhtml_legend=1 00:34:54.365 --rc geninfo_all_blocks=1 00:34:54.365 --rc geninfo_unexecuted_blocks=1 00:34:54.365 00:34:54.365 ' 00:34:54.365 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:54.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:54.366 --rc genhtml_branch_coverage=1 00:34:54.366 --rc genhtml_function_coverage=1 00:34:54.366 --rc genhtml_legend=1 00:34:54.366 --rc geninfo_all_blocks=1 00:34:54.366 --rc geninfo_unexecuted_blocks=1 00:34:54.366 00:34:54.366 ' 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:54.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:54.366 --rc genhtml_branch_coverage=1 00:34:54.366 --rc genhtml_function_coverage=1 00:34:54.366 --rc genhtml_legend=1 00:34:54.366 --rc geninfo_all_blocks=1 00:34:54.366 --rc geninfo_unexecuted_blocks=1 00:34:54.366 00:34:54.366 ' 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:54.366 15:44:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:34:54.366 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:02.505 15:44:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:02.505 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:02.505 15:44:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:02.505 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:02.505 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:02.506 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:02.506 
15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:02.506 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:02.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:02.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.536 ms 00:35:02.506 00:35:02.506 --- 10.0.0.2 ping statistics --- 00:35:02.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:02.506 rtt min/avg/max/mdev = 0.536/0.536/0.536/0.000 ms 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:02.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:02.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:35:02.506 00:35:02.506 --- 10.0.0.1 ping statistics --- 00:35:02.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:02.506 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=869120 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 869120 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 869120 ']' 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:02.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:02.506 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:02.506 [2024-11-20 15:44:50.698115] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:02.506 [2024-11-20 15:44:50.699283] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:35:02.506 [2024-11-20 15:44:50.699334] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:02.506 [2024-11-20 15:44:50.799675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:02.506 [2024-11-20 15:44:50.856764] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:02.506 [2024-11-20 15:44:50.856816] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:02.506 [2024-11-20 15:44:50.856825] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:02.506 [2024-11-20 15:44:50.856832] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:02.506 [2024-11-20 15:44:50.856840] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:02.506 [2024-11-20 15:44:50.859204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:02.507 [2024-11-20 15:44:50.859301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:02.507 [2024-11-20 15:44:50.859460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:02.507 [2024-11-20 15:44:50.859460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:02.507 [2024-11-20 15:44:50.937395] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:02.507 [2024-11-20 15:44:50.938268] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:02.507 [2024-11-20 15:44:50.938658] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:35:02.507 [2024-11-20 15:44:50.939104] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:02.507 [2024-11-20 15:44:50.939149] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:02.768 [2024-11-20 15:44:51.564732] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:02.768 Malloc0 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:02.768 [2024-11-20 15:44:51.652946] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:35:02.768 test case1: single bdev can't be used in multiple subsystems 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:02.768 [2024-11-20 15:44:51.684332] bdev.c:8278:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:35:02.768 [2024-11-20 15:44:51.684365] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:35:02.768 [2024-11-20 15:44:51.684374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:02.768 request: 00:35:02.768 { 00:35:02.768 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:35:02.768 "namespace": { 00:35:02.768 "bdev_name": "Malloc0", 00:35:02.768 "no_auto_visible": false 00:35:02.768 }, 00:35:02.768 "method": "nvmf_subsystem_add_ns", 00:35:02.768 "req_id": 1 00:35:02.768 } 00:35:02.768 Got JSON-RPC error response 00:35:02.768 response: 00:35:02.768 { 00:35:02.768 "code": -32602, 00:35:02.768 "message": "Invalid parameters" 00:35:02.768 } 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:35:02.768 15:44:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:35:02.768 Adding namespace failed - expected result. 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:35:02.768 test case2: host connect to nvmf target in multiple paths 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:02.768 [2024-11-20 15:44:51.696524] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.768 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:03.340 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:35:03.914 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:35:03.914 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:35:03.914 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:03.915 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:35:03.915 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:35:05.832 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:05.832 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:35:05.832 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:05.832 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:35:05.832 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:05.832 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:35:05.832 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:35:05.832 [global] 00:35:05.832 thread=1 00:35:05.832 invalidate=1 
00:35:05.832 rw=write 00:35:05.832 time_based=1 00:35:05.832 runtime=1 00:35:05.832 ioengine=libaio 00:35:05.832 direct=1 00:35:05.832 bs=4096 00:35:05.832 iodepth=1 00:35:05.832 norandommap=0 00:35:05.832 numjobs=1 00:35:05.832 00:35:05.832 verify_dump=1 00:35:05.832 verify_backlog=512 00:35:05.832 verify_state_save=0 00:35:05.832 do_verify=1 00:35:05.832 verify=crc32c-intel 00:35:05.832 [job0] 00:35:05.832 filename=/dev/nvme0n1 00:35:05.832 Could not set queue depth (nvme0n1) 00:35:06.093 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:06.093 fio-3.35 00:35:06.093 Starting 1 thread 00:35:07.595 00:35:07.595 job0: (groupid=0, jobs=1): err= 0: pid=870230: Wed Nov 20 15:44:56 2024 00:35:07.595 read: IOPS=17, BW=70.9KiB/s (72.6kB/s)(72.0KiB/1016msec) 00:35:07.595 slat (nsec): min=26743, max=27725, avg=27019.67, stdev=315.51 00:35:07.595 clat (usec): min=985, max=41980, avg=38939.86, stdev=9479.85 00:35:07.595 lat (usec): min=1013, max=42007, avg=38966.87, stdev=9479.69 00:35:07.595 clat percentiles (usec): 00:35:07.595 | 1.00th=[ 988], 5.00th=[ 988], 10.00th=[41157], 20.00th=[41157], 00:35:07.595 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:07.595 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:35:07.595 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:07.595 | 99.99th=[42206] 00:35:07.595 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:35:07.595 slat (nsec): min=9144, max=70672, avg=30644.76, stdev=10280.74 00:35:07.595 clat (usec): min=219, max=980, avg=575.62, stdev=98.66 00:35:07.595 lat (usec): min=228, max=1014, avg=606.26, stdev=103.17 00:35:07.595 clat percentiles (usec): 00:35:07.595 | 1.00th=[ 322], 5.00th=[ 400], 10.00th=[ 457], 20.00th=[ 498], 00:35:07.595 | 30.00th=[ 529], 40.00th=[ 553], 50.00th=[ 578], 60.00th=[ 594], 00:35:07.595 | 70.00th=[ 627], 80.00th=[ 668], 90.00th=[ 701], 95.00th=[ 725], 00:35:07.595 | 99.00th=[ 775], 99.50th=[ 799], 99.90th=[ 979], 99.95th=[ 979], 00:35:07.595 | 99.99th=[ 979] 00:35:07.595 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:35:07.595 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:07.595 lat (usec) : 250=0.38%, 500=19.62%, 750=75.09%, 1000=1.70% 00:35:07.595 lat (msec) : 50=3.21% 00:35:07.595 cpu : usr=1.28%, sys=1.67%, ctx=530, majf=0, minf=1 00:35:07.595 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:07.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.595 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:07.595 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:07.595 00:35:07.595 Run status group 0 (all jobs): 00:35:07.595 READ: bw=70.9KiB/s (72.6kB/s), 70.9KiB/s-70.9KiB/s (72.6kB/s-72.6kB/s), io=72.0KiB (73.7kB), run=1016-1016msec 00:35:07.595 WRITE: bw=2016KiB/s (2064kB/s), 2016KiB/s-2016KiB/s (2064kB/s-2064kB/s), io=2048KiB (2097kB), run=1016-1016msec 00:35:07.595 00:35:07.595 Disk stats (read/write): 00:35:07.595 nvme0n1: ios=65/512, merge=0/0, ticks=630/218, in_queue=848, util=93.59% 00:35:07.595 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:07.595 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:35:07.595 
15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:07.595 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:35:07.595 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:07.595 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:07.595 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:07.595 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:07.595 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:35:07.595 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:35:07.595 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:35:07.595 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:07.595 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:35:07.595 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:07.595 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:35:07.595 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:07.595 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:07.595 rmmod nvme_tcp 00:35:07.595 rmmod nvme_fabrics 00:35:07.595 rmmod nvme_keyring 00:35:07.595 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:07.595 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:35:07.595 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:35:07.595 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 869120 ']' 00:35:07.595 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 869120 00:35:07.595 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 869120 ']' 00:35:07.595 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 869120 00:35:07.595 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:35:07.595 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:07.595 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 869120 00:35:07.595 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:07.595 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:07.596 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 869120' 00:35:07.596 killing process with pid 869120 00:35:07.596 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 869120 00:35:07.596 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 869120 00:35:07.596 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:07.596 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:07.596 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:07.596 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:35:07.857 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:35:07.857 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:07.857 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:35:07.857 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:07.857 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:07.857 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:07.857 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:07.857 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:09.772 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:09.772 00:35:09.772 real 0m15.772s 00:35:09.772 user 0m37.052s 00:35:09.772 sys 0m7.485s 00:35:09.772 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:09.772 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:09.772 ************************************ 00:35:09.772 END TEST nvmf_nmic 00:35:09.772 ************************************ 00:35:09.772 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:35:09.772 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:09.772 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:09.772 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:09.772 ************************************ 00:35:09.772 START TEST nvmf_fio_target 00:35:09.772 ************************************ 00:35:09.772 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:35:10.035 * Looking for test storage... 
00:35:10.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:10.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.035 --rc genhtml_branch_coverage=1 00:35:10.035 --rc genhtml_function_coverage=1 00:35:10.035 --rc genhtml_legend=1 00:35:10.035 --rc geninfo_all_blocks=1 00:35:10.035 --rc geninfo_unexecuted_blocks=1 00:35:10.035 00:35:10.035 ' 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:10.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.035 --rc genhtml_branch_coverage=1 00:35:10.035 --rc genhtml_function_coverage=1 00:35:10.035 --rc genhtml_legend=1 00:35:10.035 --rc geninfo_all_blocks=1 00:35:10.035 --rc geninfo_unexecuted_blocks=1 00:35:10.035 00:35:10.035 ' 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:10.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.035 --rc genhtml_branch_coverage=1 00:35:10.035 --rc genhtml_function_coverage=1 00:35:10.035 --rc genhtml_legend=1 00:35:10.035 --rc geninfo_all_blocks=1 00:35:10.035 --rc geninfo_unexecuted_blocks=1 00:35:10.035 00:35:10.035 ' 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:10.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.035 --rc genhtml_branch_coverage=1 00:35:10.035 --rc genhtml_function_coverage=1 00:35:10.035 --rc genhtml_legend=1 00:35:10.035 --rc geninfo_all_blocks=1 00:35:10.035 --rc geninfo_unexecuted_blocks=1 00:35:10.035 
00:35:10.035 ' 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:10.035 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:35:10.036 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:18.179 15:45:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:18.179 15:45:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:18.179 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:18.179 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:18.179 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:18.179 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:18.179 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:18.180 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:18.180 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:18.180 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:18.180 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:35:18.180 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:18.180 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:18.180 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:18.180 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:18.180 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:18.180 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:18.180 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:18.180 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:18.180 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:18.180 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:18.180 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:18.180 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:18.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:18.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms 00:35:18.180 00:35:18.180 --- 10.0.0.2 ping statistics --- 00:35:18.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:18.180 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:35:18.180 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:18.180 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:18.180 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms 00:35:18.180 00:35:18.180 --- 10.0.0.1 ping statistics --- 00:35:18.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:18.180 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:35:18.180 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:18.180 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:35:18.180 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:18.180 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:18.180 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:18.180 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:18.180 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:18.180 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:18.180 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:18.180 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:35:18.180 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:18.180 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:18.180 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:18.180 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=874770 00:35:18.180 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 874770 00:35:18.180 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:35:18.180 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 874770 ']' 00:35:18.180 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:18.180 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:18.180 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:18.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
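The block above is the standard nvmf_tcp_init bring-up for NET_TYPE=phy: one port of the e810 pair (cvl_0_0) is moved into a private network namespace to host the SPDK target at 10.0.0.2, its peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP port 4420 on the initiator side, and the two pings verify reachability in both directions. A minimal sketch of the same topology, assuming the interface names from this run (they are autodetected from the PCI scan above; substitute your own ports on another machine):

    ip netns add cvl_0_0_ns_spdk                        # namespace that hosts the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port moves into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                  # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> root namespace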
00:35:18.180 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:18.180 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:18.180 [2024-11-20 15:45:06.617255] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:18.180 [2024-11-20 15:45:06.618385] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:35:18.180 [2024-11-20 15:45:06.618434] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:18.180 [2024-11-20 15:45:06.719127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:18.180 [2024-11-20 15:45:06.772957] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:18.180 [2024-11-20 15:45:06.773014] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:18.180 [2024-11-20 15:45:06.773025] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:18.180 [2024-11-20 15:45:06.773032] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:18.180 [2024-11-20 15:45:06.773038] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:18.180 [2024-11-20 15:45:06.775074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:18.180 [2024-11-20 15:45:06.775230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:18.180 [2024-11-20 15:45:06.775329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:18.180 [2024-11-20 15:45:06.775330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:18.180 [2024-11-20 15:45:06.852717] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:18.180 [2024-11-20 15:45:06.853842] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:18.180 [2024-11-20 15:45:06.853911] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:18.180 [2024-11-20 15:45:06.854290] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:18.180 [2024-11-20 15:45:06.854336] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
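With the target process up in the namespace (pid 874770, four reactors running in interrupt mode), fio.sh provisions the storage below over JSON-RPC: a TCP transport, seven 64 MiB/512 B malloc bdevs, a raid0 over Malloc2-3 and a concat over Malloc4-6, all exported through subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420 and then attached with nvme connect. Condensed to the essential calls (a sketch; rpc.py stands in for the full scripts/rpc.py path used in the trace):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512      # run seven times: Malloc0 .. Malloc6
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # likewise Malloc1, raid0, concat0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be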
00:35:18.752 15:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:18.752 15:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:35:18.752 15:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:18.752 15:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:18.752 15:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:18.752 15:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:18.752 15:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:18.752 [2024-11-20 15:45:07.660619] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:19.012 15:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:19.012 15:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:35:19.012 15:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:19.364 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:35:19.364 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:19.626 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:35:19.626 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:19.626 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:35:19.626 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:35:19.889 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:20.150 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:35:20.150 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:20.413 15:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:35:20.413 15:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:20.413 15:45:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:35:20.413 15:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:35:20.673 15:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:20.934 15:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:35:20.934 15:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:21.195 15:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:35:21.195 15:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:35:21.195 15:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:21.456 [2024-11-20 15:45:10.296585] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:21.456 15:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:35:21.718 15:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:35:21.979 15:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:22.239 15:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:35:22.239 15:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:35:22.239 15:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:22.239 15:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:35:22.239 15:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:35:22.239 15:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:35:24.784 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:24.784 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:35:24.784 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:24.784 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:35:24.784 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:24.784 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:35:24.784 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:35:24.784 [global] 00:35:24.784 thread=1 00:35:24.784 invalidate=1 00:35:24.784 rw=write 00:35:24.784 time_based=1 00:35:24.784 runtime=1 00:35:24.784 ioengine=libaio 00:35:24.784 direct=1 00:35:24.784 bs=4096 00:35:24.784 iodepth=1 00:35:24.784 norandommap=0 00:35:24.784 numjobs=1 00:35:24.784 00:35:24.784 verify_dump=1 00:35:24.784 verify_backlog=512 00:35:24.784 verify_state_save=0 00:35:24.784 do_verify=1 00:35:24.784 verify=crc32c-intel 00:35:24.784 [job0] 00:35:24.784 filename=/dev/nvme0n1 00:35:24.784 [job1] 00:35:24.784 filename=/dev/nvme0n2 00:35:24.784 [job2] 00:35:24.784 filename=/dev/nvme0n3 00:35:24.784 [job3] 00:35:24.784 filename=/dev/nvme0n4 00:35:24.784 Could not set queue depth (nvme0n1) 00:35:24.784 Could not set queue depth (nvme0n2) 00:35:24.784 Could not set queue depth (nvme0n3) 00:35:24.784 Could not set queue depth (nvme0n4) 00:35:24.784 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:24.784 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:24.784 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:24.784 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:24.784 fio-3.35 00:35:24.784 Starting 4 threads 00:35:26.168 00:35:26.168 job0: (groupid=0, jobs=1): err= 0: pid=876794: Wed Nov 20 15:45:14 2024 00:35:26.168 read: IOPS=16, BW=67.2KiB/s (68.8kB/s)(68.0KiB/1012msec) 00:35:26.168 slat (nsec): min=27151, max=28156, avg=27478.29, stdev=310.82 00:35:26.168 clat (usec): min=1297, max=42028, avg=39261.94, stdev=9793.73 00:35:26.168 lat (usec): min=1324, max=42055, avg=39289.41, stdev=9793.68 00:35:26.168 clat percentiles (usec): 00:35:26.168 | 1.00th=[ 1303], 5.00th=[ 1303], 10.00th=[40633], 20.00th=[41157], 00:35:26.168 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:35:26.168 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:26.168 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:26.168 | 99.99th=[42206] 00:35:26.168 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:35:26.168 slat (nsec): min=9553, max=66190, avg=33751.64, stdev=9010.90 00:35:26.168 clat (usec): min=146, max=985, avg=623.04, stdev=114.54 00:35:26.168 lat (usec): min=181, max=1021, avg=656.80, stdev=116.84 00:35:26.168 clat percentiles (usec): 00:35:26.168 | 1.00th=[ 359], 5.00th=[ 433], 10.00th=[ 482], 20.00th=[ 537], 00:35:26.168 | 30.00th=[ 570], 40.00th=[ 603], 50.00th=[ 627], 60.00th=[ 652], 00:35:26.168 | 70.00th=[ 685], 80.00th=[ 717], 90.00th=[ 750], 95.00th=[ 799], 00:35:26.168 
| 99.00th=[ 889], 99.50th=[ 963], 99.90th=[ 988], 99.95th=[ 988], 00:35:26.168 | 99.99th=[ 988] 00:35:26.168 bw ( KiB/s): min= 4096, max= 4096, per=46.95%, avg=4096.00, stdev= 0.00, samples=1 00:35:26.168 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:26.168 lat (usec) : 250=0.38%, 500=14.74%, 750=71.83%, 1000=9.83% 00:35:26.168 lat (msec) : 2=0.19%, 50=3.02% 00:35:26.168 cpu : usr=1.58%, sys=1.58%, ctx=531, majf=0, minf=1 00:35:26.168 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:26.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.168 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.168 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.168 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:26.168 job1: (groupid=0, jobs=1): err= 0: pid=876805: Wed Nov 20 15:45:14 2024 00:35:26.168 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:35:26.168 slat (nsec): min=27103, max=64451, avg=27948.37, stdev=2802.39 00:35:26.168 clat (usec): min=639, max=1341, avg=985.13, stdev=75.74 00:35:26.168 lat (usec): min=667, max=1369, avg=1013.08, stdev=75.48 00:35:26.168 clat percentiles (usec): 00:35:26.168 | 1.00th=[ 775], 5.00th=[ 840], 10.00th=[ 898], 20.00th=[ 938], 00:35:26.168 | 30.00th=[ 963], 40.00th=[ 971], 50.00th=[ 988], 60.00th=[ 1004], 00:35:26.168 | 70.00th=[ 1020], 80.00th=[ 1037], 90.00th=[ 1057], 95.00th=[ 1090], 00:35:26.168 | 99.00th=[ 1156], 99.50th=[ 1287], 99.90th=[ 1336], 99.95th=[ 1336], 00:35:26.168 | 99.99th=[ 1336] 00:35:26.168 write: IOPS=716, BW=2865KiB/s (2934kB/s)(2868KiB/1001msec); 0 zone resets 00:35:26.168 slat (nsec): min=9717, max=68547, avg=31984.76, stdev=9992.38 00:35:26.168 clat (usec): min=204, max=3396, avg=620.64, stdev=170.98 00:35:26.168 lat (usec): min=240, max=3407, avg=652.63, stdev=173.35 00:35:26.168 clat percentiles (usec): 00:35:26.168 | 1.00th=[ 297], 5.00th=[ 404], 10.00th=[ 445], 20.00th=[ 515], 00:35:26.168 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 644], 00:35:26.168 | 70.00th=[ 676], 80.00th=[ 725], 90.00th=[ 775], 95.00th=[ 865], 00:35:26.168 | 99.00th=[ 971], 99.50th=[ 1029], 99.90th=[ 3392], 99.95th=[ 3392], 00:35:26.168 | 99.99th=[ 3392] 00:35:26.168 bw ( KiB/s): min= 4096, max= 4096, per=46.95%, avg=4096.00, stdev= 0.00, samples=1 00:35:26.168 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:26.168 lat (usec) : 250=0.33%, 500=10.41%, 750=39.22%, 1000=33.03% 00:35:26.168 lat (msec) : 2=16.92%, 4=0.08% 00:35:26.168 cpu : usr=3.20%, sys=4.20%, ctx=1230, majf=0, minf=1 00:35:26.168 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:26.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.168 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.168 issued rwts: total=512,717,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.168 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:26.168 job2: (groupid=0, jobs=1): err= 0: pid=876815: Wed Nov 20 15:45:14 2024 00:35:26.168 read: IOPS=16, BW=65.8KiB/s (67.4kB/s)(68.0KiB/1033msec) 00:35:26.168 slat (nsec): min=25987, max=29815, avg=26496.29, stdev=888.29 00:35:26.168 clat (usec): min=1104, max=42058, avg=39495.08, stdev=9895.62 00:35:26.168 lat (usec): min=1133, max=42084, avg=39521.58, stdev=9894.76 00:35:26.168 clat percentiles (usec): 00:35:26.168 | 1.00th=[ 1106], 5.00th=[ 1106], 10.00th=[41157], 
20.00th=[41681],
00:35:26.168 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206],
00:35:26.168 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:35:26.168 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:35:26.168 | 99.99th=[42206]
00:35:26.168 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets
00:35:26.168 slat (nsec): min=10236, max=56924, avg=29700.97, stdev=10899.23
00:35:26.168 clat (usec): min=250, max=1141, avg=660.39, stdev=160.74
00:35:26.168 lat (usec): min=266, max=1159, avg=690.09, stdev=164.35
00:35:26.168 clat percentiles (usec):
00:35:26.168 | 1.00th=[ 310], 5.00th=[ 392], 10.00th=[ 469], 20.00th=[ 519],
00:35:26.168 | 30.00th=[ 570], 40.00th=[ 611], 50.00th=[ 652], 60.00th=[ 693],
00:35:26.169 | 70.00th=[ 750], 80.00th=[ 824], 90.00th=[ 873], 95.00th=[ 914],
00:35:26.169 | 99.00th=[ 996], 99.50th=[ 1004], 99.90th=[ 1139], 99.95th=[ 1139],
00:35:26.169 | 99.99th=[ 1139]
00:35:26.169 bw ( KiB/s): min= 4096, max= 4096, per=46.95%, avg=4096.00, stdev= 0.00, samples=1
00:35:26.169 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:35:26.169 lat (usec) : 500=15.69%, 750=51.98%, 1000=28.54%
00:35:26.169 lat (msec) : 2=0.76%, 50=3.02%
00:35:26.169 cpu : usr=0.39%, sys=1.74%, ctx=530, majf=0, minf=1
00:35:26.169 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:35:26.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:26.169 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:26.169 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:26.169 latency : target=0, window=0, percentile=100.00%, depth=1
00:35:26.169 job3: (groupid=0, jobs=1): err= 0: pid=876817: Wed Nov 20 15:45:14 2024
00:35:26.169 read: IOPS=15, BW=62.9KiB/s (64.4kB/s)(64.0KiB/1017msec)
00:35:26.169 slat (nsec): min=27577, max=28789, avg=27971.38, stdev=345.96
00:35:26.169 clat (usec): min=40967, max=42123, avg=41616.99, stdev=461.32
00:35:26.169 lat (usec): min=40995, max=42150, avg=41644.97, stdev=461.34
00:35:26.169 clat percentiles (usec):
00:35:26.169 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:35:26.169 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681],
00:35:26.169 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:35:26.169 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:35:26.169 | 99.99th=[42206]
00:35:26.169 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets
00:35:26.169 slat (nsec): min=9733, max=56351, avg=33225.73, stdev=9522.13
00:35:26.169 clat (usec): min=232, max=1015, avg=636.10, stdev=129.38
00:35:26.169 lat (usec): min=244, max=1050, avg=669.32, stdev=132.97
00:35:26.169 clat percentiles (usec):
00:35:26.169 | 1.00th=[ 314], 5.00th=[ 412], 10.00th=[ 474], 20.00th=[ 529],
00:35:26.169 | 30.00th=[ 578], 40.00th=[ 619], 50.00th=[ 644], 60.00th=[ 668],
00:35:26.169 | 70.00th=[ 701], 80.00th=[ 734], 90.00th=[ 791], 95.00th=[ 848],
00:35:26.169 | 99.00th=[ 930], 99.50th=[ 971], 99.90th=[ 1012], 99.95th=[ 1012],
00:35:26.169 | 99.99th=[ 1012]
00:35:26.169 bw ( KiB/s): min= 4096, max= 4096, per=46.95%, avg=4096.00, stdev= 0.00, samples=1
00:35:26.169 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:35:26.169 lat (usec) : 250=0.38%, 500=15.15%, 750=65.53%, 1000=15.72%
00:35:26.169 lat (msec) : 2=0.19%, 50=3.03%
00:35:26.169 cpu : usr=1.18%, sys=1.97%, ctx=529, majf=0, minf=1
00:35:26.169 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:35:26.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:26.169 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:26.169 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:26.169 latency : target=0, window=0, percentile=100.00%, depth=1
00:35:26.169
00:35:26.169 Run status group 0 (all jobs):
00:35:26.169 READ: bw=2176KiB/s (2228kB/s), 62.9KiB/s-2046KiB/s (64.4kB/s-2095kB/s), io=2248KiB (2302kB), run=1001-1033msec
00:35:26.169 WRITE: bw=8724KiB/s (8933kB/s), 1983KiB/s-2865KiB/s (2030kB/s-2934kB/s), io=9012KiB (9228kB), run=1001-1033msec
00:35:26.169
00:35:26.169 Disk stats (read/write):
00:35:26.169 nvme0n1: ios=34/512, merge=0/0, ticks=1299/245, in_queue=1544, util=83.77%
00:35:26.169 nvme0n2: ios=519/512, merge=0/0, ticks=876/279, in_queue=1155, util=87.64%
00:35:26.169 nvme0n3: ios=34/512, merge=0/0, ticks=1342/315, in_queue=1657, util=91.65%
00:35:26.169 nvme0n4: ios=51/512, merge=0/0, ticks=1356/265, in_queue=1621, util=93.79%
00:35:26.169 15:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v
00:35:26.169 [global]
00:35:26.169 thread=1
00:35:26.169 invalidate=1
00:35:26.169 rw=randwrite
00:35:26.169 time_based=1
00:35:26.169 runtime=1
00:35:26.169 ioengine=libaio
00:35:26.169 direct=1
00:35:26.169 bs=4096
00:35:26.169 iodepth=1
00:35:26.169 norandommap=0
00:35:26.169 numjobs=1
00:35:26.169
00:35:26.169 verify_dump=1
00:35:26.169 verify_backlog=512
00:35:26.169 verify_state_save=0
00:35:26.169 do_verify=1
00:35:26.169 verify=crc32c-intel
00:35:26.169 [job0]
00:35:26.169 filename=/dev/nvme0n1
00:35:26.169 [job1]
00:35:26.169 filename=/dev/nvme0n2
00:35:26.169 [job2]
00:35:26.169 filename=/dev/nvme0n3
00:35:26.169 [job3]
00:35:26.169 filename=/dev/nvme0n4
00:35:26.169 Could not set queue depth (nvme0n1)
00:35:26.169 Could not set queue depth (nvme0n2)
00:35:26.169 Could not set queue depth (nvme0n3)
00:35:26.169 Could not set queue depth (nvme0n4)
00:35:26.429 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:35:26.429 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:35:26.429 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:35:26.429 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:35:26.429 fio-3.35
00:35:26.429 Starting 4 threads
00:35:27.811
00:35:27.811 job0: (groupid=0, jobs=1): err= 0: pid=877277: Wed Nov 20 15:45:16 2024
00:35:27.811 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec)
00:35:27.811 slat (nsec): min=7598, max=61983, avg=26090.69, stdev=4115.57
00:35:27.811 clat (usec): min=546, max=1353, avg=1026.65, stdev=133.18
00:35:27.811 lat (usec): min=573, max=1379, avg=1052.74, stdev=133.82
00:35:27.811 clat percentiles (usec):
00:35:27.811 | 1.00th=[ 693], 5.00th=[ 791], 10.00th=[ 840], 20.00th=[ 922],
00:35:27.811 | 30.00th=[ 979], 40.00th=[ 1012], 50.00th=[ 1037], 60.00th=[ 1057],
00:35:27.811 | 70.00th=[ 1090], 80.00th=[ 1139], 90.00th=[ 1188], 95.00th=[ 1254],
00:35:27.811 | 99.00th=[ 1303], 99.50th=[ 1336], 99.90th=[ 1352], 99.95th=[ 1352],
00:35:27.811 | 99.99th=[ 1352]
00:35:27.811 write: IOPS=666, BW=2665KiB/s (2729kB/s)(2668KiB/1001msec); 0 zone resets
00:35:27.811 slat (nsec): min=9680, max=52882, avg=29869.96, stdev=9043.78
00:35:27.811 clat (usec): min=226, max=1114, avg=647.56, stdev=128.12
00:35:27.811 lat (usec): min=236, max=1147, avg=677.43, stdev=131.94
00:35:27.811 clat percentiles (usec):
00:35:27.811 | 1.00th=[ 310], 5.00th=[ 404], 10.00th=[ 474], 20.00th=[ 537],
00:35:27.811 | 30.00th=[ 594], 40.00th=[ 627], 50.00th=[ 660], 60.00th=[ 701],
00:35:27.811 | 70.00th=[ 725], 80.00th=[ 758], 90.00th=[ 791], 95.00th=[ 824],
00:35:27.811 | 99.00th=[ 914], 99.50th=[ 938], 99.90th=[ 1123], 99.95th=[ 1123],
00:35:27.811 | 99.99th=[ 1123]
00:35:27.811 bw ( KiB/s): min= 4096, max= 4096, per=47.21%, avg=4096.00, stdev= 0.00, samples=1
00:35:27.811 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:35:27.811 lat (usec) : 250=0.25%, 500=8.40%, 750=36.90%, 1000=27.14%
00:35:27.811 lat (msec) : 2=27.31%
00:35:27.811 cpu : usr=1.40%, sys=3.80%, ctx=1181, majf=0, minf=1
00:35:27.811 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:35:27.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:27.811 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:27.811 issued rwts: total=512,667,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:27.811 latency : target=0, window=0, percentile=100.00%, depth=1
00:35:27.811 job1: (groupid=0, jobs=1): err= 0: pid=877289: Wed Nov 20 15:45:16 2024
00:35:27.811 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec)
00:35:27.811 slat (nsec): min=24179, max=43273, avg=24988.02, stdev=1730.38
00:35:27.811 clat (usec): min=581, max=1915, avg=1175.52, stdev=133.10
00:35:27.811 lat (usec): min=606, max=1940, avg=1200.51, stdev=132.98
00:35:27.811 clat percentiles (usec):
00:35:27.811 | 1.00th=[ 758], 5.00th=[ 955], 10.00th=[ 1020], 20.00th=[ 1090],
00:35:27.811 | 30.00th=[ 1123], 40.00th=[ 1172], 50.00th=[ 1188], 60.00th=[ 1205],
00:35:27.811 | 70.00th=[ 1237], 80.00th=[ 1270], 90.00th=[ 1303], 95.00th=[ 1352],
00:35:27.811 | 99.00th=[ 1532], 99.50th=[ 1549], 99.90th=[ 1909], 99.95th=[ 1909],
00:35:27.811 | 99.99th=[ 1909]
00:35:27.811 write: IOPS=542, BW=2170KiB/s (2222kB/s)(2172KiB/1001msec); 0 zone resets
00:35:27.811 slat (nsec): min=9309, max=50430, avg=29170.90, stdev=7744.33
00:35:27.811 clat (usec): min=225, max=1247, avg=664.43, stdev=132.92
00:35:27.811 lat (usec): min=256, max=1280, avg=693.60, stdev=134.78
00:35:27.812 clat percentiles (usec):
00:35:27.812 | 1.00th=[ 355], 5.00th=[ 437], 10.00th=[ 498], 20.00th=[ 562],
00:35:27.812 | 30.00th=[ 603], 40.00th=[ 627], 50.00th=[ 660], 60.00th=[ 701],
00:35:27.812 | 70.00th=[ 742], 80.00th=[ 766], 90.00th=[ 807], 95.00th=[ 873],
00:35:27.812 | 99.00th=[ 955], 99.50th=[ 979], 99.90th=[ 1254], 99.95th=[ 1254],
00:35:27.812 | 99.99th=[ 1254]
00:35:27.812 bw ( KiB/s): min= 4096, max= 4096, per=47.21%, avg=4096.00, stdev= 0.00, samples=1
00:35:27.812 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:35:27.812 lat (usec) : 250=0.09%, 500=5.40%, 750=33.36%, 1000=16.21%
00:35:27.812 lat (msec) : 2=44.93%
00:35:27.812 cpu : usr=1.80%, sys=2.80%, ctx=1055, majf=0, minf=2
00:35:27.812 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:35:27.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:27.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:27.812 issued rwts: total=512,543,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:27.812 latency : target=0, window=0, percentile=100.00%, depth=1
00:35:27.812 job2: (groupid=0, jobs=1): err= 0: pid=877304: Wed Nov 20 15:45:16 2024
00:35:27.812 read: IOPS=16, BW=66.9KiB/s (68.5kB/s)(68.0KiB/1016msec)
00:35:27.812 slat (nsec): min=26230, max=27020, avg=26555.65, stdev=205.27
00:35:27.812 clat (usec): min=1144, max=42088, avg=39529.56, stdev=9892.66
00:35:27.812 lat (usec): min=1170, max=42115, avg=39556.11, stdev=9892.70
00:35:27.812 clat percentiles (usec):
00:35:27.812 | 1.00th=[ 1139], 5.00th=[ 1139], 10.00th=[41157], 20.00th=[41681],
00:35:27.812 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206],
00:35:27.812 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:35:27.812 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:35:27.812 | 99.99th=[42206]
00:35:27.812 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets
00:35:27.812 slat (nsec): min=9142, max=63343, avg=29477.40, stdev=10695.31
00:35:27.812 clat (usec): min=236, max=1071, avg=633.17, stdev=131.85
00:35:27.812 lat (usec): min=246, max=1104, avg=662.65, stdev=136.24
00:35:27.812 clat percentiles (usec):
00:35:27.812 | 1.00th=[ 347], 5.00th=[ 396], 10.00th=[ 457], 20.00th=[ 523],
00:35:27.812 | 30.00th=[ 578], 40.00th=[ 603], 50.00th=[ 635], 60.00th=[ 668],
00:35:27.812 | 70.00th=[ 709], 80.00th=[ 750], 90.00th=[ 791], 95.00th=[ 840],
00:35:27.812 | 99.00th=[ 930], 99.50th=[ 971], 99.90th=[ 1074], 99.95th=[ 1074],
00:35:27.812 | 99.99th=[ 1074]
00:35:27.812 bw ( KiB/s): min= 4096, max= 4096, per=47.21%, avg=4096.00, stdev= 0.00, samples=1
00:35:27.812 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:35:27.812 lat (usec) : 250=0.19%, 500=16.26%, 750=61.81%, 1000=18.34%
00:35:27.812 lat (msec) : 2=0.38%, 50=3.02%
00:35:27.812 cpu : usr=0.89%, sys=1.77%, ctx=530, majf=0, minf=1
00:35:27.812 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:35:27.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:27.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:27.812 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:27.812 latency : target=0, window=0, percentile=100.00%, depth=1
00:35:27.812 job3: (groupid=0, jobs=1): err= 0: pid=877311: Wed Nov 20 15:45:16 2024
00:35:27.812 read: IOPS=18, BW=73.8KiB/s (75.6kB/s)(76.0KiB/1030msec)
00:35:27.812 slat (nsec): min=28013, max=29123, avg=28363.00, stdev=288.77
00:35:27.812 clat (usec): min=40742, max=41443, avg=40989.32, stdev=171.70
00:35:27.812 lat (usec): min=40770, max=41472, avg=41017.68, stdev=171.72
00:35:27.812 clat percentiles (usec):
00:35:27.812 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633],
00:35:27.812 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:35:27.812 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681],
00:35:27.812 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681],
00:35:27.812 | 99.99th=[41681]
00:35:27.812 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets
00:35:27.812 slat (nsec): min=9161, max=54700, avg=32367.77, stdev=9628.85
00:35:27.812 clat (usec): min=120, max=835, avg=447.90, stdev=138.19
00:35:27.812 lat (usec): min=131, max=870, avg=480.27, stdev=140.32
00:35:27.812 clat percentiles (usec):
00:35:27.812 | 1.00th=[ 145], 5.00th=[ 231], 10.00th=[ 273], 20.00th=[ 322],
00:35:27.812 | 30.00th=[ 371], 40.00th=[ 416], 50.00th=[ 449], 60.00th=[ 478],
00:35:27.812 | 70.00th=[ 519], 80.00th=[ 570], 90.00th=[ 627], 95.00th=[ 685],
00:35:27.812 | 99.00th=[ 766], 99.50th=[ 799], 99.90th=[ 832], 99.95th=[ 832],
00:35:27.812 | 99.99th=[ 832]
00:35:27.812 bw ( KiB/s): min= 4096, max= 4096, per=47.21%, avg=4096.00, stdev= 0.00, samples=1
00:35:27.812 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:35:27.812 lat (usec) : 250=6.78%, 500=57.06%, 750=30.51%, 1000=2.07%
00:35:27.812 lat (msec) : 50=3.58%
00:35:27.812 cpu : usr=1.17%, sys=1.94%, ctx=532, majf=0, minf=1
00:35:27.812 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:35:27.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:27.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:27.812 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:27.812 latency : target=0, window=0, percentile=100.00%, depth=1
00:35:27.812
00:35:27.812 Run status group 0 (all jobs):
00:35:27.812 READ: bw=4117KiB/s (4215kB/s), 66.9KiB/s-2046KiB/s (68.5kB/s-2095kB/s), io=4240KiB (4342kB), run=1001-1030msec
00:35:27.812 WRITE: bw=8676KiB/s (8884kB/s), 1988KiB/s-2665KiB/s (2036kB/s-2729kB/s), io=8936KiB (9150kB), run=1001-1030msec
00:35:27.812
00:35:27.812 Disk stats (read/write):
00:35:27.812 nvme0n1: ios=512/512, merge=0/0, ticks=614/320, in_queue=934, util=91.58%
00:35:27.812 nvme0n2: ios=444/512, merge=0/0, ticks=561/321, in_queue=882, util=91.62%
00:35:27.812 nvme0n3: ios=70/512, merge=0/0, ticks=1191/270, in_queue=1461, util=96.72%
00:35:27.812 nvme0n4: ios=51/512, merge=0/0, ticks=1401/169, in_queue=1570, util=97.97%
00:35:27.812 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v
00:35:27.812 [global]
00:35:27.812 thread=1
00:35:27.812 invalidate=1
00:35:27.812 rw=write
00:35:27.812 time_based=1
00:35:27.812 runtime=1
00:35:27.812 ioengine=libaio
00:35:27.812 direct=1
00:35:27.812 bs=4096
00:35:27.812 iodepth=128
00:35:27.812 norandommap=0
00:35:27.812 numjobs=1
00:35:27.812
00:35:27.812 verify_dump=1
00:35:27.812 verify_backlog=512
00:35:27.812 verify_state_save=0
00:35:27.812 do_verify=1
00:35:27.812 verify=crc32c-intel
00:35:27.812 [job0]
00:35:27.812 filename=/dev/nvme0n1
00:35:27.812 [job1]
00:35:27.812 filename=/dev/nvme0n2
00:35:27.812 [job2]
00:35:27.812 filename=/dev/nvme0n3
00:35:27.812 [job3]
00:35:27.812 filename=/dev/nvme0n4
00:35:27.812 Could not set queue depth (nvme0n1)
00:35:27.812 Could not set queue depth (nvme0n2)
00:35:27.812 Could not set queue depth (nvme0n3)
00:35:27.812 Could not set queue depth (nvme0n4)
00:35:28.073 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:35:28.073 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:35:28.073 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:35:28.073 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:35:28.073 fio-3.35
00:35:28.073 Starting 4 threads
00:35:29.458
00:35:29.458 job0: (groupid=0, jobs=1): err= 0: pid=877730: Wed Nov 20 15:45:18 2024
00:35:29.458 read: IOPS=5204, BW=20.3MiB/s (21.3MB/s)(20.5MiB/1008msec)
00:35:29.458 slat (nsec): min=944, max=20370k, avg=74165.07, stdev=654159.31
00:35:29.458 clat (usec): min=1785, max=37671, avg=9800.26, stdev=4512.92
00:35:29.458 lat (usec): min=3355, max=37698, avg=9874.42, stdev=4569.38
00:35:29.458 clat percentiles (usec):
00:35:29.458 | 1.00th=[ 4146], 5.00th=[ 5276], 10.00th=[ 5604], 20.00th=[ 6325],
00:35:29.458 | 30.00th=[ 6915], 40.00th=[ 7373], 50.00th=[ 8455], 60.00th=[ 9241],
00:35:29.458 | 70.00th=[10945], 80.00th=[13304], 90.00th=[15664], 95.00th=[19530],
00:35:29.458 | 99.00th=[23725], 99.50th=[23725], 99.90th=[33424], 99.95th=[33817],
00:35:29.458 | 99.99th=[37487]
00:35:29.458 write: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec); 0 zone resets
00:35:29.458 slat (nsec): min=1672, max=14428k, avg=95672.85, stdev=630168.25
00:35:29.458 clat (usec): min=636, max=93661, avg=13551.46, stdev=16009.94
00:35:29.458 lat (usec): min=640, max=93669, avg=13647.13, stdev=16114.44
00:35:29.458 clat percentiles (usec):
00:35:29.458 | 1.00th=[ 1434], 5.00th=[ 4015], 10.00th=[ 4424], 20.00th=[ 5800],
00:35:29.458 | 30.00th=[ 6456], 40.00th=[ 6980], 50.00th=[ 7504], 60.00th=[ 9503],
00:35:29.458 | 70.00th=[12125], 80.00th=[15401], 90.00th=[25035], 95.00th=[55837],
00:35:29.458 | 99.00th=[83362], 99.50th=[88605], 99.90th=[93848], 99.95th=[93848],
00:35:29.458 | 99.99th=[93848]
00:35:29.458 bw ( KiB/s): min=12272, max=32768, per=24.01%, avg=22520.00, stdev=14492.86, samples=2
00:35:29.458 iops : min= 3068, max= 8192, avg=5630.00, stdev=3623.22, samples=2
00:35:29.458 lat (usec) : 750=0.09%, 1000=0.09%
00:35:29.458 lat (msec) : 2=0.72%, 4=1.79%, 10=59.62%, 20=28.40%, 50=6.44%
00:35:29.458 lat (msec) : 100=2.85%
00:35:29.458 cpu : usr=3.48%, sys=6.45%, ctx=452, majf=0, minf=1
00:35:29.458 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4%
00:35:29.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:29.458 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:35:29.458 issued rwts: total=5246,5632,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:29.458 latency : target=0, window=0, percentile=100.00%, depth=128
00:35:29.458 job1: (groupid=0, jobs=1): err= 0: pid=877758: Wed Nov 20 15:45:18 2024
00:35:29.458 read: IOPS=7146, BW=27.9MiB/s (29.3MB/s)(28.1MiB/1006msec)
00:35:29.458 slat (nsec): min=868, max=10611k, avg=60978.22, stdev=480013.61
00:35:29.458 clat (usec): min=1684, max=36827, avg=8279.93, stdev=3797.14
00:35:29.458 lat (usec): min=1725, max=36833, avg=8340.91, stdev=3833.55
00:35:29.458 clat percentiles (usec):
00:35:29.458 | 1.00th=[ 3228], 5.00th=[ 4424], 10.00th=[ 5473], 20.00th=[ 5932],
00:35:29.458 | 30.00th=[ 6259], 40.00th=[ 6652], 50.00th=[ 7373], 60.00th=[ 7767],
00:35:29.458 | 70.00th=[ 8455], 80.00th=[ 9765], 90.00th=[12387], 95.00th=[16712],
00:35:29.458 | 99.00th=[20841], 99.50th=[26608], 99.90th=[34866], 99.95th=[36963],
00:35:29.458 | 99.99th=[36963]
00:35:29.458 write: IOPS=7634, BW=29.8MiB/s (31.3MB/s)(30.0MiB/1006msec); 0 zone resets
00:35:29.458 slat (nsec): min=1563, max=8710.4k, avg=62978.61, stdev=418224.51
00:35:29.458 clat (usec): min=714, max=80330, avg=8875.85, stdev=10053.86
00:35:29.458 lat (usec): min=722, max=80354, avg=8938.83, stdev=10116.01
00:35:29.458 clat percentiles (usec):
00:35:29.458 | 1.00th=[ 1450], 5.00th=[ 2606], 10.00th=[ 3523], 20.00th=[ 4359],
00:35:29.458 | 30.00th=[ 5342], 40.00th=[ 5997], 50.00th=[ 6652], 60.00th=[ 7177],
00:35:29.458 | 70.00th=[ 7635], 80.00th=[ 8455], 90.00th=[14091], 95.00th=[24511],
00:35:29.458 | 99.00th=[64750], 99.50th=[71828], 99.90th=[77071], 99.95th=[80217],
00:35:29.458 | 99.99th=[80217]
00:35:29.458 bw ( KiB/s): min=20480, max=40104, per=32.30%, avg=30292.00, stdev=13876.26, samples=2
00:35:29.458 iops : min= 5120, max=10026, avg=7573.00, stdev=3469.07, samples=2
00:35:29.458 lat (usec) : 750=0.02%, 1000=0.05%
00:35:29.458 lat (msec) : 2=1.37%, 4=7.15%, 10=74.58%, 20=12.83%, 50=3.20%
00:35:29.458 lat (msec) : 100=0.80%
00:35:29.458 cpu : usr=6.47%, sys=7.66%, ctx=563, majf=0, minf=2
00:35:29.458 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
00:35:29.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:29.458 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:35:29.458 issued rwts: total=7189,7680,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:29.458 latency : target=0, window=0, percentile=100.00%, depth=128
00:35:29.458 job2: (groupid=0, jobs=1): err= 0: pid=877788: Wed Nov 20 15:45:18 2024
00:35:29.458 read: IOPS=5633, BW=22.0MiB/s (23.1MB/s)(23.0MiB/1045msec)
00:35:29.458 slat (nsec): min=916, max=16191k, avg=77119.72, stdev=581594.91
00:35:29.458 clat (usec): min=4265, max=50004, avg=10701.43, stdev=6206.06
00:35:29.458 lat (usec): min=4273, max=53520, avg=10778.55, stdev=6229.66
00:35:29.458 clat percentiles (usec):
00:35:29.458 | 1.00th=[ 4752], 5.00th=[ 6718], 10.00th=[ 7308], 20.00th=[ 8225],
00:35:29.458 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[ 9765],
00:35:29.458 | 70.00th=[10552], 80.00th=[11600], 90.00th=[13566], 95.00th=[17171],
00:35:29.458 | 99.00th=[49021], 99.50th=[49546], 99.90th=[50070], 99.95th=[50070],
00:35:29.458 | 99.99th=[50070]
00:35:29.458 write: IOPS=5879, BW=23.0MiB/s (24.1MB/s)(24.0MiB/1045msec); 0 zone resets
00:35:29.458 slat (nsec): min=1585, max=10737k, avg=82112.16, stdev=595214.25
00:35:29.458 clat (usec): min=1277, max=91001, avg=10757.70, stdev=9506.11
00:35:29.458 lat (usec): min=1287, max=91008, avg=10839.81, stdev=9572.62
00:35:29.458 clat percentiles (usec):
00:35:29.458 | 1.00th=[ 2868], 5.00th=[ 4817], 10.00th=[ 5866], 20.00th=[ 7046],
00:35:29.458 | 30.00th=[ 7832], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9503],
00:35:29.458 | 70.00th=[10159], 80.00th=[11207], 90.00th=[14877], 95.00th=[21890],
00:35:29.458 | 99.00th=[77071], 99.50th=[85459], 99.90th=[90702], 99.95th=[90702],
00:35:29.458 | 99.99th=[90702]
00:35:29.458 bw ( KiB/s): min=20480, max=28672, per=26.20%, avg=24576.00, stdev=5792.62, samples=2
00:35:29.458 iops : min= 5120, max= 7168, avg=6144.00, stdev=1448.15, samples=2
00:35:29.458 lat (msec) : 2=0.30%, 4=1.28%, 10=64.41%, 20=29.07%, 50=4.21%
00:35:29.458 lat (msec) : 100=0.72%
00:35:29.458 cpu : usr=4.41%, sys=6.32%, ctx=339, majf=0, minf=2
00:35:29.459 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5%
00:35:29.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:29.459 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:35:29.459 issued rwts: total=5887,6144,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:29.459 latency : target=0, window=0, percentile=100.00%, depth=128
00:35:29.459 job3: (groupid=0, jobs=1): err= 0: pid=877799: Wed Nov 20 15:45:18 2024
00:35:29.459 read: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec)
00:35:29.459 slat (nsec): min=1099, max=17929k, avg=86391.04, stdev=768403.42
00:35:29.459 clat (usec): min=2154, max=84894, avg=12543.75, stdev=9907.23
00:35:29.459 lat (usec): min=2156, max=84901, avg=12630.14, stdev=9987.55
00:35:29.459 clat percentiles (usec):
00:35:29.459 | 1.00th=[ 3097], 5.00th=[ 4555], 10.00th=[ 5407], 20.00th=[ 6980],
00:35:29.459 | 30.00th=[ 8094], 40.00th=[ 8979], 50.00th=[ 9503], 60.00th=[10028],
00:35:29.459 | 70.00th=[12125], 80.00th=[16450], 90.00th=[22414], 95.00th=[27395],
00:35:29.459 | 99.00th=[56361], 99.50th=[65799], 99.90th=[84411], 99.95th=[84411],
00:35:29.459 | 99.99th=[84411]
00:35:29.459 write: IOPS=4991, BW=19.5MiB/s (20.4MB/s)(19.7MiB/1011msec); 0 zone resets
00:35:29.459 slat (nsec): min=1691, max=12950k, avg=92374.08, stdev=597102.36
00:35:29.459 clat (usec): min=596, max=84868, avg=13931.43, stdev=12663.26
00:35:29.459 lat (usec): min=802, max=84883, avg=14023.80, stdev=12733.79
00:35:29.459 clat percentiles (usec):
00:35:29.459 | 1.00th=[ 1827], 5.00th=[ 4113], 10.00th=[ 5538], 20.00th=[ 6587],
00:35:29.459 | 30.00th=[ 7570], 40.00th=[ 8717], 50.00th=[ 9896], 60.00th=[11731],
00:35:29.459 | 70.00th=[14615], 80.00th=[17957], 90.00th=[27657], 95.00th=[32900],
00:35:29.459 | 99.00th=[79168], 99.50th=[79168], 99.90th=[81265], 99.95th=[81265],
00:35:29.459 | 99.99th=[84411]
00:35:29.459 bw ( KiB/s): min=15976, max=23368, per=20.98%, avg=19672.00, stdev=5226.93, samples=2
00:35:29.459 iops : min= 3994, max= 5842, avg=4918.00, stdev=1306.73, samples=2
00:35:29.459 lat (usec) : 750=0.01%, 1000=0.18%
00:35:29.459 lat (msec) : 2=0.49%, 4=3.41%, 10=51.27%, 20=28.98%, 50=13.40%
00:35:29.459 lat (msec) : 100=2.26%
00:35:29.459 cpu : usr=3.56%, sys=5.64%, ctx=400, majf=0, minf=1
00:35:29.459 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3%
00:35:29.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:29.459 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:35:29.459 issued rwts: total=4608,5046,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:29.459 latency : target=0, window=0, percentile=100.00%, depth=128
00:35:29.459
00:35:29.459 Run status group 0 (all jobs):
00:35:29.459 READ: bw=85.7MiB/s (89.9MB/s), 17.8MiB/s-27.9MiB/s (18.7MB/s-29.3MB/s), io=89.6MiB (93.9MB), run=1006-1045msec
00:35:29.459 WRITE: bw=91.6MiB/s (96.0MB/s), 19.5MiB/s-29.8MiB/s (20.4MB/s-31.3MB/s), io=95.7MiB (100MB), run=1006-1045msec
00:35:29.459
00:35:29.459 Disk stats (read/write):
00:35:29.459 nvme0n1: ios=4770/5120, merge=0/0, ticks=42771/50261, in_queue=93032, util=96.49%
00:35:29.459 nvme0n2: ios=4783/5120, merge=0/0, ticks=33660/36487, in_queue=70147, util=93.37%
00:35:29.459 nvme0n3: ios=5935/6144, merge=0/0, ticks=35400/35294, in_queue=70694, util=92.19%
00:35:29.459 nvme0n4: ios=4154/4439, merge=0/0, ticks=37538/39488, in_queue=77026, util=98.41%
00:35:29.459 15:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v
00:35:29.459 [global]
00:35:29.459 thread=1
00:35:29.459 invalidate=1
00:35:29.459 rw=randwrite
00:35:29.459 time_based=1
00:35:29.459 runtime=1
00:35:29.459 ioengine=libaio
00:35:29.459 direct=1
00:35:29.459 bs=4096
00:35:29.459 iodepth=128
00:35:29.459 norandommap=0
00:35:29.459 numjobs=1
00:35:29.459
00:35:29.459 verify_dump=1
00:35:29.459 verify_backlog=512
00:35:29.459 verify_state_save=0
00:35:29.459 do_verify=1
00:35:29.459 verify=crc32c-intel
00:35:29.459 [job0]
00:35:29.459 filename=/dev/nvme0n1
00:35:29.459 [job1]
00:35:29.459 filename=/dev/nvme0n2
00:35:29.459 [job2]
00:35:29.459 filename=/dev/nvme0n3
00:35:29.459 [job3]
00:35:29.459 filename=/dev/nvme0n4
00:35:29.459 Could not set queue depth (nvme0n1)
00:35:29.459 Could not set queue depth (nvme0n2)
00:35:29.459 Could not set queue depth (nvme0n3)
00:35:29.459 Could not set queue depth (nvme0n4)
00:35:29.720 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:35:29.720 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:35:29.720 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:35:29.720 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:35:29.720 fio-3.35
00:35:29.720 Starting 4 threads
00:35:31.103
00:35:31.103 job0: (groupid=0, jobs=1): err= 0: pid=878226: Wed Nov 20 15:45:19 2024
00:35:31.103 read: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec)
00:35:31.103 slat (nsec): min=937, max=10884k, avg=70025.50, stdev=583443.17
00:35:31.103 clat (usec): min=1438, max=76851, avg=10468.42, stdev=7546.34
00:35:31.103 lat (usec): min=1465, max=76856, avg=10538.45, stdev=7589.15
00:35:31.103 clat percentiles (usec):
00:35:31.103 | 1.00th=[ 1811], 5.00th=[ 2311], 10.00th=[ 3752], 20.00th=[ 5211],
00:35:31.103 | 30.00th=[ 6063], 40.00th=[ 6980], 50.00th=[ 8029], 60.00th=[10028],
00:35:31.103 | 70.00th=[13173], 80.00th=[15795], 90.00th=[19006], 95.00th=[21103],
00:35:31.103 | 99.00th=[30802], 99.50th=[51119], 99.90th=[77071], 99.95th=[77071],
00:35:31.103 | 99.99th=[77071]
00:35:31.104 write: IOPS=7450, BW=29.1MiB/s (30.5MB/s)(29.2MiB/1005msec); 0 zone resets
00:35:31.104 slat (nsec): min=1565, max=8363.6k, avg=51386.89, stdev=417029.15
00:35:31.104 clat (usec): min=389, max=82380, avg=7715.82, stdev=7939.84
00:35:31.104 lat (usec): min=440, max=82390, avg=7767.21, stdev=7962.62
00:35:31.104 clat percentiles (usec):
00:35:31.104 | 1.00th=[ 873], 5.00th=[ 1352], 10.00th=[ 1827], 20.00th=[ 2507],
00:35:31.104 | 30.00th=[ 4293], 40.00th=[ 5276], 50.00th=[ 5669], 60.00th=[ 6915],
00:35:31.104 | 70.00th=[ 8717], 80.00th=[11076], 90.00th=[14746], 95.00th=[15664],
00:35:31.104 | 99.00th=[40109], 99.50th=[67634], 99.90th=[77071], 99.95th=[82314],
00:35:31.104 | 99.99th=[82314]
00:35:31.104 bw ( KiB/s): min=18176, max=40704, per=42.25%, avg=29440.00, stdev=15929.70, samples=2
00:35:31.104 iops : min= 4544, max=10176, avg=7360.00, stdev=3982.43, samples=2
00:35:31.104 lat (usec) : 500=0.01%, 750=0.28%, 1000=0.62%
00:35:31.104 lat (msec) : 2=6.87%, 4=12.52%, 10=47.11%, 20=27.35%, 50=4.52%
00:35:31.104 lat (msec) : 100=0.72%
00:35:31.104 cpu : usr=5.68%, sys=8.07%, ctx=501, majf=0, minf=1
00:35:31.104 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
00:35:31.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:31.104 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:35:31.104 issued rwts: total=6656,7488,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:31.104 latency : target=0, window=0, percentile=100.00%, depth=128
00:35:31.104 job1: (groupid=0, jobs=1): err= 0: pid=878233: Wed Nov 20 15:45:19 2024
00:35:31.104 read: IOPS=3000, BW=11.7MiB/s (12.3MB/s)(11.8MiB/1006msec)
00:35:31.104 slat (nsec): min=1355, max=20639k, avg=151815.76, stdev=1047033.56
00:35:31.104 clat (usec): min=1614, max=70159, avg=17659.33, stdev=7191.58
00:35:31.104 lat (usec): min=8226, max=70165, avg=17811.14, stdev=7297.02
00:35:31.104 clat percentiles (usec):
00:35:31.104 | 1.00th=[ 8455], 5.00th=[10814], 10.00th=[11469], 20.00th=[12518],
00:35:31.104 | 30.00th=[13566], 40.00th=[14353], 50.00th=[15795], 60.00th=[16909],
00:35:31.104 | 70.00th=[19792], 80.00th=[22676], 90.00th=[23725], 95.00th=[30802],
00:35:31.104 | 99.00th=[42730], 99.50th=[56886], 99.90th=[69731], 99.95th=[69731],
00:35:31.104 | 99.99th=[69731]
00:35:31.104 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets
00:35:31.104 slat (nsec): min=1559, max=17376k, avg=171493.03, stdev=931472.10
00:35:31.104 clat (usec): min=1108, max=92960, avg=24185.51, stdev=17954.57
00:35:31.104 lat (usec): min=1118, max=92970, avg=24357.00, stdev=18081.03
00:35:31.104 clat percentiles (usec):
00:35:31.104 | 1.00th=[ 5604], 5.00th=[ 6718], 10.00th=[ 8717], 20.00th=[10290],
00:35:31.104 | 30.00th=[12387], 40.00th=[13566], 50.00th=[16057], 60.00th=[21627],
00:35:31.104 | 70.00th=[28967], 80.00th=[39060], 90.00th=[46400], 95.00th=[60556],
00:35:31.104 | 99.00th=[85459], 99.50th=[86508], 99.90th=[92799], 99.95th=[92799],
00:35:31.104 | 99.99th=[92799]
00:35:31.104 bw ( KiB/s): min=12280, max=12296, per=17.63%, avg=12288.00, stdev=11.31, samples=2
00:35:31.104 iops : min= 3070, max= 3074, avg=3072.00, stdev= 2.83, samples=2
00:35:31.104 lat (msec) : 2=0.05%, 10=9.89%, 20=55.71%, 50=29.43%, 100=4.93%
00:35:31.104 cpu : usr=2.09%, sys=3.98%, ctx=237, majf=0, minf=1
00:35:31.104 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0%
00:35:31.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:31.104 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:35:31.104 issued rwts: total=3018,3072,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:31.104 latency : target=0, window=0, percentile=100.00%, depth=128
00:35:31.104 job2: (groupid=0, jobs=1): err= 0: pid=878247: Wed Nov 20 15:45:19 2024
00:35:31.104 read: IOPS=3994, BW=15.6MiB/s (16.4MB/s)(15.6MiB/1003msec)
00:35:31.104 slat (nsec): min=994, max=13127k, avg=97407.94, stdev=736736.28
00:35:31.104 clat (usec): min=2314, max=52393, avg=11808.31, stdev=7269.65
00:35:31.104 lat (usec): min=2319, max=52411, avg=11905.72, stdev=7335.48
00:35:31.104 clat percentiles (usec):
00:35:31.104 | 1.00th=[ 2737], 5.00th=[ 3556], 10.00th=[ 4080], 20.00th=[ 6783],
00:35:31.104 | 30.00th=[ 8455], 40.00th=[ 9765], 50.00th=[10421], 60.00th=[11994],
00:35:31.104 | 70.00th=[13566], 80.00th=[14746], 90.00th=[17957], 95.00th=[25822],
00:35:31.104 | 99.00th=[41681], 99.50th=[50594], 99.90th=[52167], 99.95th=[52167],
00:35:31.104 | 99.99th=[52167]
00:35:31.104 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets
00:35:31.104 slat (nsec): min=1609, max=18175k, avg=135020.80, stdev=743728.58
00:35:31.104 clat (usec): min=1238, max=72794, avg=19467.66, stdev=17152.71
00:35:31.104 lat (usec): min=1249, max=77104, avg=19602.68, stdev=17272.76
00:35:31.104 clat percentiles (usec):
00:35:31.104 | 1.00th=[ 2245], 5.00th=[ 4178], 10.00th=[ 5997], 20.00th=[ 6587],
00:35:31.104 | 30.00th=[ 7570], 40.00th=[ 9110], 50.00th=[11076], 60.00th=[13042],
00:35:31.104 | 70.00th=[25035], 80.00th=[37487], 90.00th=[45876], 95.00th=[53740],
00:35:31.104 | 99.00th=[69731], 99.50th=[70779], 99.90th=[72877], 99.95th=[72877],
00:35:31.104 | 99.99th=[72877]
00:35:31.104 bw ( KiB/s): min=14344, max=18424, per=23.51%, avg=16384.00, stdev=2885.00, samples=2
00:35:31.104 iops : min= 3586, max= 4606, avg=4096.00, stdev=721.25, samples=2
00:35:31.104 lat (msec) : 2=0.36%, 4=6.55%, 10=37.04%, 20=36.16%, 50=16.11%
00:35:31.104 lat (msec) : 100=3.78%
00:35:31.104 cpu : usr=3.19%, sys=4.69%, ctx=402, majf=0, minf=1
00:35:31.104 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:35:31.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:31.104 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:35:31.104 issued rwts: total=4006,4096,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:31.104 latency : target=0, window=0, percentile=100.00%, depth=128
00:35:31.104 job3: (groupid=0, jobs=1): err= 0: pid=878253: Wed Nov 20 15:45:19 2024
00:35:31.104 read: IOPS=2958, BW=11.6MiB/s (12.1MB/s)(12.1MiB/1047msec)
00:35:31.104 slat (nsec): min=1100, max=13787k, avg=141559.63, stdev=960814.40
00:35:31.104 clat (usec): min=6800, max=48217, avg=18105.41, stdev=5434.57
00:35:31.104 lat (usec): min=6804, max=48226, avg=18246.97, stdev=5496.31
00:35:31.104 clat percentiles (usec):
00:35:31.104 | 1.00th=[ 9503], 5.00th=[10683], 10.00th=[12649], 20.00th=[14877],
00:35:31.104 | 30.00th=[15533], 40.00th=[16057], 50.00th=[17171], 60.00th=[18220],
00:35:31.104 | 70.00th=[19268], 80.00th=[21365], 90.00th=[24249], 95.00th=[26346],
00:35:31.104 | 99.00th=[40109], 99.50th=[47973], 99.90th=[47973], 99.95th=[47973],
00:35:31.104 | 99.99th=[47973]
00:35:31.104 write: IOPS=3423, BW=13.4MiB/s (14.0MB/s)(14.0MiB/1047msec); 0 zone resets
00:35:31.104 slat (nsec): min=1569, max=21128k, avg=152163.15, stdev=1009229.42
00:35:31.104 clat (usec): min=5333, max=56298, avg=21024.11, stdev=10649.99
00:35:31.104 lat (usec): min=5340, max=63064, avg=21176.28, stdev=10731.35
00:35:31.104 clat percentiles (usec):
00:35:31.104 | 1.00th=[ 9241], 5.00th=[11338], 10.00th=[11994], 20.00th=[12649],
00:35:31.104 | 30.00th=[14222], 40.00th=[15139], 50.00th=[15926], 60.00th=[19530],
00:35:31.104 | 70.00th=[23987], 80.00th=[28705], 90.00th=[39584], 95.00th=[42206],
00:35:31.104 | 99.00th=[55837], 99.50th=[55837], 99.90th=[56361], 99.95th=[56361],
00:35:31.104 | 99.99th=[56361]
00:35:31.104 bw ( KiB/s): min=12240, max=15616, per=19.99%, avg=13928.00, stdev=2387.19, samples=2
00:35:31.104 iops : min= 3060, max= 3904, avg=3482.00, stdev=596.80, samples=2
00:35:31.104 lat (msec) : 10=2.68%, 20=65.61%, 50=30.77%, 100=0.94%
00:35:31.104 cpu : usr=2.39%, sys=3.63%, ctx=202, majf=0, minf=2
00:35:31.104 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1%
00:35:31.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:31.104 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:35:31.104 issued rwts: total=3098,3584,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:31.104 latency : target=0, window=0, percentile=100.00%, depth=128
00:35:31.104
00:35:31.104 Run status group 0 (all jobs):
00:35:31.104 READ: bw=62.6MiB/s (65.6MB/s), 11.6MiB/s-25.9MiB/s (12.1MB/s-27.1MB/s), io=65.5MiB (68.7MB), run=1003-1047msec
00:35:31.104 WRITE: bw=68.1MiB/s (71.4MB/s), 11.9MiB/s-29.1MiB/s (12.5MB/s-30.5MB/s), io=71.2MiB (74.7MB), run=1003-1047msec
00:35:31.104
00:35:31.104 Disk stats (read/write):
00:35:31.104 nvme0n1: ios=6037/6656, merge=0/0, ticks=41112/31529, in_queue=72641, util=98.30%
00:35:31.104 nvme0n2: ios=2609/2735, merge=0/0, ticks=21910/29968, in_queue=51878, util=89.29%
00:35:31.104 nvme0n3: ios=2616/2613, merge=0/0, ticks=27729/52295, in_queue=80024, util=96.41%
00:35:31.104 nvme0n4: ios=2617/2935, merge=0/0, ticks=22455/27372, in_queue=49827, util=94.02%
00:35:31.104 15:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync
00:35:31.104 15:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=878402
00:35:31.104 15:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3
00:35:31.104 15:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10
00:35:31.104 [global]
00:35:31.104 thread=1
00:35:31.104 invalidate=1
00:35:31.104 rw=read
00:35:31.104 time_based=1
00:35:31.104 runtime=10
00:35:31.104 ioengine=libaio
00:35:31.104 direct=1
00:35:31.104 bs=4096
00:35:31.104 iodepth=1
00:35:31.104 norandommap=1
00:35:31.104 numjobs=1
00:35:31.104
00:35:31.104 [job0]
00:35:31.104 filename=/dev/nvme0n1
00:35:31.104 [job1]
00:35:31.104 filename=/dev/nvme0n2
00:35:31.104 [job2]
00:35:31.104 filename=/dev/nvme0n3
00:35:31.104 [job3]
00:35:31.104 filename=/dev/nvme0n4
00:35:31.386 Could not set queue depth (nvme0n1)
00:35:31.386 Could not set queue depth (nvme0n2)
00:35:31.386 Could not set queue depth (nvme0n3)
00:35:31.386 Could not set queue depth (nvme0n4)
00:35:31.652 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:35:31.652 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:35:31.652 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:35:31.652 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:35:31.652 fio-3.35
00:35:31.652 Starting 4 threads
00:35:34.197 15:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0
00:35:34.197 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=12550144, buflen=4096
00:35:34.197 fio: pid=878755, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:35:34.457 15:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0
00:35:34.457 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=950272, buflen=4096
00:35:34.457 fio: pid=878744, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:35:34.457 15:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:35:34.457 15:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:35:34.717 15:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:35:34.717 15:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:35:34.718 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=1589248, buflen=4096
00:35:34.718 fio: pid=878691, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:35:34.978 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=7770112, buflen=4096
00:35:34.978 fio: pid=878714, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:35:34.978 15:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:35:34.978 15:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2
00:35:34.978
00:35:34.978 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=878691: Wed Nov 20 15:45:23 2024
00:35:34.978 read: IOPS=133, BW=534KiB/s (547kB/s)(1552KiB/2907msec)
00:35:34.978 slat (usec): min=7, max=14617, avg=65.76, stdev=739.72
00:35:34.978 clat (usec): min=467, max=42172, avg=7361.42, stdev=14818.36
00:35:34.978 lat (usec): min=506, max=56013, avg=7427.27, stdev=14923.16
00:35:34.978 clat percentiles (usec):
00:35:34.978 | 1.00th=[ 685], 5.00th=[ 799], 10.00th=[ 848], 20.00th=[ 898],
00:35:34.978 | 30.00th=[ 930], 40.00th=[ 971], 50.00th=[ 996], 60.00th=[ 1029],
00:35:34.978 | 70.00th=[ 1090], 80.00th=[ 1139], 90.00th=[41681], 95.00th=[41681],
00:35:34.978 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:35:34.978 | 99.99th=[42206]
00:35:34.978 bw ( KiB/s): min= 96, max= 2648, per=8.34%, avg=606.40, stdev=1141.29, samples=5
00:35:34.978 iops : min= 24, max= 662, avg=151.60, stdev=285.32, samples=5
00:35:34.978 lat (usec) : 500=0.26%, 750=2.83%, 1000=48.07%
00:35:34.978 lat (msec) : 2=32.90%, 50=15.68%
00:35:34.978 cpu : usr=0.28%, sys=0.52%, ctx=393, majf=0, minf=2
00:35:34.978 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:35:34.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:34.978 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:34.978 issued rwts: total=389,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:34.978 latency : target=0, window=0, percentile=100.00%, depth=1
00:35:34.978 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=878714: Wed Nov 20 15:45:23 2024
00:35:34.978 read: IOPS=617, BW=2470KiB/s (2529kB/s)(7588KiB/3072msec)
00:35:34.978 slat (usec): min=7, max=9979, avg=41.52, stdev=349.21
00:35:34.978 clat (usec): min=486, max=42149, avg=1558.38, stdev=4138.81
00:35:34.978 lat (usec): min=511, max=42175, avg=1599.91, stdev=4157.73
00:35:34.978 clat percentiles (usec):
00:35:34.978 | 1.00th=[ 537], 5.00th=[ 889], 10.00th=[ 971], 20.00th=[ 1045],
00:35:34.978 | 30.00th=[ 1090], 40.00th=[ 1123], 50.00th=[ 1156], 60.00th=[ 1172],
00:35:34.978 | 70.00th=[ 1188], 80.00th=[ 1221], 90.00th=[ 1254], 95.00th=[ 1287],
00:35:34.978 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206],
00:35:34.978 | 99.99th=[42206]
00:35:34.978 bw ( KiB/s): min= 368, max= 3384, per=34.40%, avg=2500.67, stdev=1191.95, samples=6
00:35:34.978 iops : min= 92, max= 846, avg=625.17, stdev=297.99, samples=6
00:35:34.978 lat (usec) : 500=0.37%, 750=1.90%, 1000=11.12%
00:35:34.978 lat (msec) : 2=85.30%, 10=0.21%, 50=1.05%
00:35:34.979 cpu : usr=0.65%, sys=1.89%, ctx=1902, majf=0, minf=1
00:35:34.979 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:35:34.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:34.979 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:34.979 issued rwts: total=1898,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:34.979 latency : target=0, window=0, percentile=100.00%, depth=1
00:35:34.979 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=878744: Wed Nov 20 15:45:23 2024
00:35:34.979 read: IOPS=86, BW=343KiB/s (351kB/s)(928KiB/2708msec)
00:35:34.979 slat (nsec): min=3623, max=67829, avg=21628.81, stdev=9221.26
00:35:34.979 clat (usec): min=397, max=41953, avg=11552.27, stdev=17847.66
00:35:34.979 lat (usec): min=423, max=41961, avg=11573.95, stdev=17840.90
00:35:34.979 clat percentiles (usec):
00:35:34.979 | 1.00th=[ 429], 5.00th=[ 562], 10.00th=[ 668], 20.00th=[ 742],
00:35:34.979 | 30.00th=[ 791], 40.00th=[ 832], 50.00th=[ 865], 60.00th=[ 906],
00:35:34.979 | 70.00th=[ 971], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:35:34.979 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206],
00:35:34.979 | 99.99th=[42206]
00:35:34.979 bw ( KiB/s): min= 96, max= 1424, per=5.00%, avg=363.20, stdev=593.02, samples=5
00:35:34.979 iops : min= 24, max= 356, avg=90.80, stdev=148.25, samples=5
00:35:34.979 lat (usec) : 500=2.15%, 750=20.17%, 1000=49.79%
00:35:34.979 lat (msec) : 2=0.86%, 50=26.61%
00:35:34.979 cpu : usr=0.04%, sys=0.22%, ctx=234, majf=0, minf=2
00:35:34.979 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:35:34.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:34.979 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:34.979 issued rwts: total=233,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:34.979 latency : target=0, window=0, percentile=100.00%, depth=1
00:35:34.979 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=878755: Wed Nov 20 15:45:23 2024
00:35:34.979 read: IOPS=1214, BW=4858KiB/s (4974kB/s)(12.0MiB/2523msec)
00:35:34.979 slat (nsec): min=6703, max=73592, avg=24675.28, stdev=6518.03
00:35:34.979 clat (usec): min=296, max=1062, avg=785.74, stdev=108.50
00:35:34.979 lat (usec): min=322, max=1091, avg=810.42, stdev=110.15
00:35:34.979 clat percentiles (usec):
00:35:34.979 | 1.00th=[ 465], 5.00th=[ 586], 10.00th=[ 644], 20.00th=[ 709],
00:35:34.979 | 30.00th=[ 750], 40.00th=[ 775], 50.00th=[ 799], 60.00th=[ 824],
00:35:34.979 | 70.00th=[ 848], 80.00th=[ 873], 90.00th=[ 906], 95.00th=[ 938],
00:35:34.979 | 99.00th=[ 988], 99.50th=[ 1004], 99.90th=[ 1037], 99.95th=[ 1057],
00:35:34.979 | 99.99th=[ 1057]
00:35:34.979 bw ( KiB/s): min= 4816, max= 4960, per=67.46%, avg=4902.40, stdev=62.84, samples=5
00:35:34.979 iops : min= 1204, max= 1240, avg=1225.60, stdev=15.71, samples=5
00:35:34.979 lat (usec) : 500=1.92%, 750=29.56%, 1000=67.83%
00:35:34.979 lat (msec) : 2=0.65%
00:35:34.979 cpu : usr=1.47%, sys=3.29%, ctx=3067, majf=0, minf=2
00:35:34.979 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:35:34.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:34.979 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:34.979 issued rwts: total=3065,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:34.979 latency : target=0, window=0, percentile=100.00%, depth=1
00:35:34.979
00:35:34.979 Run status group 0 (all jobs):
00:35:34.979 READ: bw=7267KiB/s (7441kB/s), 343KiB/s-4858KiB/s (351kB/s-4974kB/s), io=21.8MiB (22.9MB), run=2523-3072msec
00:35:34.979
00:35:34.979 Disk stats (read/write):
00:35:34.979 nvme0n1: ios=418/0, merge=0/0, ticks=3247/0, in_queue=3247, util=99.13%
00:35:34.979 nvme0n2: ios=1893/0, merge=0/0, ticks=2822/0, in_queue=2822, util=93.59%
00:35:34.979 nvme0n3: ios=228/0, merge=0/0, ticks=2517/0, in_queue=2517, util=95.55%
00:35:34.979 nvme0n4: ios=2793/0, merge=0/0, ticks=2109/0, in_queue=2109, util=95.92%
00:35:34.979 15:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:35:34.979 15:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3
00:35:35.240 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:35:35.240 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4
00:35:35.500 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:35:35.500 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
00:35:35.500 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:35:35.500 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6
00:35:35.762 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0
00:35:35.762 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 878402
00:35:35.762 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4
00:35:35.762 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:35:35.762 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:35:35.762 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:35:35.762 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0
00:35:35.762 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:35:35.762 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:35:35.762 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:35:35.762 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:35:36.022 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0
00:35:36.022 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']'
00:35:36.022 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected'
00:35:36.022 nvmf hotplug test: fio failed as expected
00:35:36.022 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:35:36.022 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state
00:35:36.022 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state
00:35:36.022 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state
00:35:36.022 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT
00:35:36.022 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini
00:35:36.022 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:35:36.022 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync
00:35:36.022 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:35:36.022 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e
00:35:36.022 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:35:36.022 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:35:36.022 rmmod nvme_tcp
00:35:36.022 rmmod nvme_fabrics
00:35:36.022 rmmod nvme_keyring
00:35:36.283 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:35:36.283 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e
00:35:36.283 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0
00:35:36.283 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 874770 ']'
00:35:36.283 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 874770
00:35:36.283 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 874770 ']'
00:35:36.283 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 874770
00:35:36.283 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname
00:35:36.283 15:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:36.283 15:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 874770
00:35:36.283 15:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:35:36.283 15:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:35:36.283 15:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 874770'
00:35:36.283 killing process with pid 874770
00:35:36.283 15:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 874770
00:35:36.283 15:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 874770
00:35:36.283 15:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:35:36.283 15:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:35:36.283 15:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:35:36.283 15:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr
00:35:36.283 15:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save
00:35:36.283 15:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:35:36.283 15:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore
00:35:36.283 15:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:35:36.283 15:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:35:36.283 15:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:36.283 15:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:35:36.283 15:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:35:38.829
00:35:38.829 real 0m28.533s
00:35:38.829 user 2m19.667s
00:35:38.829 sys 0m12.231s
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:35:38.829 ************************************
00:35:38.829 END TEST nvmf_fio_target
00:35:38.829 ************************************
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:35:38.829 ************************************
00:35:38.829 START TEST nvmf_bdevio
00:35:38.829 ************************************
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode
00:35:38.829 * Looking for test storage...
00:35:38.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-:
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-:
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<'
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 ))
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:35:38.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:38.829 --rc genhtml_branch_coverage=1
00:35:38.829 --rc genhtml_function_coverage=1
00:35:38.829 --rc genhtml_legend=1
00:35:38.829 --rc geninfo_all_blocks=1
00:35:38.829 --rc geninfo_unexecuted_blocks=1
00:35:38.829
00:35:38.829 '
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:35:38.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:38.829 --rc genhtml_branch_coverage=1
00:35:38.829 --rc genhtml_function_coverage=1
00:35:38.829 --rc genhtml_legend=1
00:35:38.829 --rc geninfo_all_blocks=1
00:35:38.829 --rc geninfo_unexecuted_blocks=1
00:35:38.829
00:35:38.829 '
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:35:38.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:38.829 --rc genhtml_branch_coverage=1
00:35:38.829 --rc genhtml_function_coverage=1
00:35:38.829 --rc genhtml_legend=1
00:35:38.829 --rc geninfo_all_blocks=1
00:35:38.829 --rc geninfo_unexecuted_blocks=1
00:35:38.829
00:35:38.829 '
00:35:38.829 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:35:38.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:38.829 --rc genhtml_branch_coverage=1
00:35:38.830 --rc genhtml_function_coverage=1
00:35:38.830 --rc genhtml_legend=1
00:35:38.830 --rc geninfo_all_blocks=1
00:35:38.830 --rc geninfo_unexecuted_blocks=1
00:35:38.830
00:35:38.830 '
00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s
00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob
00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH
00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0
00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:35:38.830 15:45:27
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:35:38.830 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:46.974 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:46.974 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:35:46.974 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:46.974 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:46.974 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:46.974 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:46.974 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:46.974 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:35:46.974 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:35:46.974 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:35:46.974 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:35:46.974 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:35:46.974 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:35:46.974 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:35:46.974 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:35:46.974 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:46.974 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:46.975 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:46.975 15:45:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:46.975 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:46.975 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:46.975 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:46.975 15:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:46.976 15:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:46.976 15:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:46.976 15:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:46.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:46.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:35:46.976 00:35:46.976 --- 10.0.0.2 ping statistics --- 00:35:46.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:46.976 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:35:46.976 15:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:46.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:46.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:35:46.976 00:35:46.976 --- 10.0.0.1 ping statistics --- 00:35:46.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:46.976 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:35:46.976 15:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:46.976 15:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:35:46.976 15:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:46.976 15:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:46.976 15:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:46.976 15:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:46.976 15:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:46.976 15:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:46.976 15:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:46.976 15:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:35:46.976 15:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:46.976 15:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:46.976 15:45:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:46.976 15:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=883720 00:35:46.976 15:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 883720 00:35:46.976 15:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:35:46.976 15:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 883720 ']' 00:35:46.976 15:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:46.976 15:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:46.976 15:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:46.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:46.976 15:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:46.976 15:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:46.976 [2024-11-20 15:45:35.139492] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:46.976 [2024-11-20 15:45:35.140611] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:35:46.976 [2024-11-20 15:45:35.140662] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:46.976 [2024-11-20 15:45:35.239382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:46.976 [2024-11-20 15:45:35.292528] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:46.976 [2024-11-20 15:45:35.292580] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:46.976 [2024-11-20 15:45:35.292589] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:46.976 [2024-11-20 15:45:35.292596] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:46.976 [2024-11-20 15:45:35.292602] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:46.976 [2024-11-20 15:45:35.294610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:46.976 [2024-11-20 15:45:35.294772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:46.976 [2024-11-20 15:45:35.294934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:46.976 [2024-11-20 15:45:35.294935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:46.976 [2024-11-20 15:45:35.372859] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
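At this point in the trace the target is up: nvmf_tgt was launched inside the cvl_0_0_ns_spdk namespace in interrupt mode, its reactors came up on cores 3-6, and each spdk_thread was switched to interrupt mode. Condensed from the trace above, the launch boils down to the following sketch; the paths and flags are exactly the ones logged, and waitforlisten is the autotest_common.sh helper that polls the RPC socket:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -m 0x78 pins reactors to cores 3-6; --interrupt-mode is the mode this
  # whole suite exists to exercise; -e 0xFFFF enables all tracepoint groups.
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
  nvmfpid=$!
  waitforlisten "$nvmfpid"   # blocks until /var/tmp/spdk.sock answers
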
00:35:46.976 [2024-11-20 15:45:35.373142] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:46.976 [2024-11-20 15:45:35.373941] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:46.976 [2024-11-20 15:45:35.374237] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:46.976 [2024-11-20 15:45:35.374290] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:47.238 15:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:47.238 15:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:35:47.238 15:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:47.238 15:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:47.239 15:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:47.239 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:47.239 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:47.239 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.239 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:47.239 [2024-11-20 15:45:36.015931] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:47.239 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.239 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:47.239 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.239 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:47.239 Malloc0 00:35:47.239 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.239 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:47.239 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.239 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:47.239 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.239 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:47.239 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.239 15:45:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:47.239 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.239 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:47.239 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.239 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:47.239 [2024-11-20 15:45:36.108306] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:47.239 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.239 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:35:47.239 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:35:47.239 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:35:47.239 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:35:47.239 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:47.239 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:47.239 { 00:35:47.239 "params": { 00:35:47.239 "name": "Nvme$subsystem", 00:35:47.239 "trtype": "$TEST_TRANSPORT", 00:35:47.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:47.239 "adrfam": "ipv4", 00:35:47.239 "trsvcid": "$NVMF_PORT", 00:35:47.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:47.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:47.239 "hdgst": ${hdgst:-false}, 00:35:47.239 "ddgst": ${ddgst:-false} 00:35:47.239 }, 00:35:47.239 "method": "bdev_nvme_attach_controller" 00:35:47.239 } 00:35:47.239 EOF 00:35:47.239 )") 00:35:47.239 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:35:47.239 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:35:47.239 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:35:47.239 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:47.239 "params": { 00:35:47.239 "name": "Nvme1", 00:35:47.239 "trtype": "tcp", 00:35:47.239 "traddr": "10.0.0.2", 00:35:47.239 "adrfam": "ipv4", 00:35:47.239 "trsvcid": "4420", 00:35:47.239 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:47.239 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:47.239 "hdgst": false, 00:35:47.239 "ddgst": false 00:35:47.239 }, 00:35:47.239 "method": "bdev_nvme_attach_controller" 00:35:47.239 }' 00:35:47.239 [2024-11-20 15:45:36.175242] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
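The trace between the launch and here assembles the data path over RPC and then feeds the initiator-side bdevio app its config through an inherited file descriptor. A condensed sketch (rpc_cmd is a thin wrapper around scripts/rpc.py against the default /var/tmp/spdk.sock; flags are exactly as logged, with -o carried in via NVMF_TRANSPORT_OPTS from common.sh; SPDK as in the previous sketch, and gen_nvmf_target_json is the common.sh helper whose output is printed verbatim above):

  RPC="$SPDK/scripts/rpc.py"
  $RPC nvmf_create_transport -t tcp -o -u 8192     # TCP transport, 8 KiB in-capsule data
  $RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM-backed bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # bdevio then attaches as an NVMe-oF host using the JSON shown above,
  # handed over a file descriptor so nothing is written to disk:
  "$SPDK/test/bdev/bdevio/bdevio" --json /dev/fd/62 62<<< "$(gen_nvmf_target_json)"
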
00:35:47.239 [2024-11-20 15:45:36.175317] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid883955 ] 00:35:47.501 [2024-11-20 15:45:36.268821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:47.501 [2024-11-20 15:45:36.324885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:47.501 [2024-11-20 15:45:36.325042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:47.501 [2024-11-20 15:45:36.325042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:47.762 I/O targets: 00:35:47.763 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:35:47.763 00:35:47.763 00:35:47.763 CUnit - A unit testing framework for C - Version 2.1-3 00:35:47.763 http://cunit.sourceforge.net/ 00:35:47.763 00:35:47.763 00:35:47.763 Suite: bdevio tests on: Nvme1n1 00:35:47.763 Test: blockdev write read block ...passed 00:35:47.763 Test: blockdev write zeroes read block ...passed 00:35:47.763 Test: blockdev write zeroes read no split ...passed 00:35:47.763 Test: blockdev write zeroes read split ...passed 00:35:47.763 Test: blockdev write zeroes read split partial ...passed 00:35:47.763 Test: blockdev reset ...[2024-11-20 15:45:36.690923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:35:47.763 [2024-11-20 15:45:36.691032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f6970 (9): Bad file descriptor 00:35:48.023 [2024-11-20 15:45:36.785923] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
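The *ERROR* line in the reset test above is expected: disconnecting the controller mid-test closes the TCP qpair, so the pending flush on the dead socket fails with errno 9 (EBADF) before bdev_nvme reconnects and reports the reset successful. It would only indicate a problem if the tests that follow failed. For reference, roughly the same round trip can be driven by hand from the initiator side with the kernel host and nvme-cli — a sketch, not part of this run; /dev/nvme0 is an assumption about how the controller enumerates, and the hostnqn is the value common.sh generated above:

  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  nvme reset /dev/nvme0        # controller-level reset, assumed device name
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
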
00:35:48.023 passed 00:35:48.023 Test: blockdev write read 8 blocks ...passed 00:35:48.023 Test: blockdev write read size > 128k ...passed 00:35:48.023 Test: blockdev write read invalid size ...passed 00:35:48.023 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:48.023 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:48.023 Test: blockdev write read max offset ...passed 00:35:48.023 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:48.023 Test: blockdev writev readv 8 blocks ...passed 00:35:48.284 Test: blockdev writev readv 30 x 1block ...passed 00:35:48.284 Test: blockdev writev readv block ...passed 00:35:48.284 Test: blockdev writev readv size > 128k ...passed 00:35:48.284 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:48.284 Test: blockdev comparev and writev ...[2024-11-20 15:45:37.086812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:48.284 [2024-11-20 15:45:37.086865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:48.284 [2024-11-20 15:45:37.086882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:48.284 [2024-11-20 15:45:37.086891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:48.284 [2024-11-20 15:45:37.087306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:48.284 [2024-11-20 15:45:37.087321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:48.284 [2024-11-20 15:45:37.087336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:48.284 [2024-11-20 15:45:37.087344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:48.284 [2024-11-20 15:45:37.087852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:48.284 [2024-11-20 15:45:37.087865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:48.284 [2024-11-20 15:45:37.087880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:48.284 [2024-11-20 15:45:37.087889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:48.284 [2024-11-20 15:45:37.088294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:48.284 [2024-11-20 15:45:37.088309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:48.284 [2024-11-20 15:45:37.088323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:48.284 [2024-11-20 15:45:37.088332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:48.284 passed 00:35:48.284 Test: blockdev nvme passthru rw ...passed 00:35:48.284 Test: blockdev nvme passthru vendor specific ...[2024-11-20 15:45:37.171661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:48.284 [2024-11-20 15:45:37.171686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:48.284 [2024-11-20 15:45:37.171955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:48.284 [2024-11-20 15:45:37.171967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:48.284 [2024-11-20 15:45:37.172259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:48.284 [2024-11-20 15:45:37.172274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:48.284 [2024-11-20 15:45:37.172550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:48.284 [2024-11-20 15:45:37.172563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:48.284 passed 00:35:48.284 Test: blockdev nvme admin passthru ...passed 00:35:48.284 Test: blockdev copy ...passed 00:35:48.284 00:35:48.284 Run Summary: Type Total Ran Passed Failed Inactive 00:35:48.284 suites 1 1 n/a 0 0 00:35:48.284 tests 23 23 23 0 0 00:35:48.284 asserts 152 152 152 0 n/a 00:35:48.284 00:35:48.284 Elapsed time = 1.419 seconds 00:35:48.545 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:48.545 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.545 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:48.545 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.545 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:35:48.545 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:35:48.545 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:48.545 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:35:48.545 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:48.545 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:35:48.545 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:48.545 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:48.545 rmmod nvme_tcp 00:35:48.545 rmmod nvme_fabrics 00:35:48.545 rmmod nvme_keyring 00:35:48.545 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
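The run summary above (23/23 tests passed, 152 asserts) closes the bdevio suite; everything from the nvmf_delete_subsystem call onward is nvmftestfini teardown. Condensed from the trace here and immediately below — the netns deletion is my reading of what remove_spdk_ns does, the rest is logged directly:

  kill "$nvmfpid" && wait "$nvmfpid"      # killprocess $nvmfpid in the trace
  modprobe -v -r nvme-tcp                 # unloads nvme_tcp/nvme_fabrics/nvme_keyring
  modprobe -v -r nvme-fabrics
  # iptr: strip only the SPDK_NVMF-tagged rules that nvmftestinit inserted
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk         # assumed body of remove_spdk_ns
  ip -4 addr flush cvl_0_1
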
00:35:48.545 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:35:48.545 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:35:48.545 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 883720 ']' 00:35:48.545 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 883720 00:35:48.545 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 883720 ']' 00:35:48.545 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 883720 00:35:48.545 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:35:48.545 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:48.545 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 883720 00:35:48.806 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:35:48.806 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:35:48.806 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 883720' 00:35:48.806 killing process with pid 883720 00:35:48.806 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 883720 00:35:48.806 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 883720 00:35:48.806 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:48.806 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:48.806 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:48.806 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:35:48.806 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:35:48.806 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:48.806 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:35:48.806 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:48.806 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:48.806 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:48.806 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:48.806 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:51.349 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:51.349 00:35:51.349 real 0m12.456s 00:35:51.349 user 0m10.578s 
00:35:51.349 sys 0m6.462s 00:35:51.349 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:51.349 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:51.349 ************************************ 00:35:51.349 END TEST nvmf_bdevio 00:35:51.349 ************************************ 00:35:51.349 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:35:51.349 00:35:51.349 real 5m2.219s 00:35:51.349 user 10m22.181s 00:35:51.349 sys 2m5.303s 00:35:51.349 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:51.349 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:51.349 ************************************ 00:35:51.349 END TEST nvmf_target_core_interrupt_mode 00:35:51.349 ************************************ 00:35:51.349 15:45:39 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:51.349 15:45:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:51.349 15:45:39 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:51.349 15:45:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:51.349 ************************************ 00:35:51.349 START TEST nvmf_interrupt 00:35:51.349 ************************************ 00:35:51.349 15:45:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:51.349 * Looking for test storage... 
00:35:51.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:51.349 15:45:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:51.349 15:45:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:35:51.349 15:45:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:51.349 15:45:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:51.349 15:45:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:51.349 15:45:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:51.349 15:45:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:51.349 15:45:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:35:51.349 15:45:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:35:51.349 15:45:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:35:51.349 15:45:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:35:51.349 15:45:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:35:51.349 15:45:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:35:51.349 15:45:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:35:51.349 15:45:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:51.349 15:45:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:35:51.349 15:45:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:35:51.349 15:45:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:51.349 15:45:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:51.349 15:45:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:35:51.349 15:45:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:35:51.349 15:45:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:51.349 15:45:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:35:51.349 15:45:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:35:51.349 15:45:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:35:51.349 15:45:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:35:51.349 15:45:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:51.349 15:45:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:35:51.349 15:45:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:35:51.349 15:45:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:51.349 15:45:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:51.349 15:45:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:35:51.349 15:45:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:51.349 15:45:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:51.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:51.349 --rc genhtml_branch_coverage=1 00:35:51.349 --rc genhtml_function_coverage=1 00:35:51.349 --rc genhtml_legend=1 00:35:51.349 --rc geninfo_all_blocks=1 00:35:51.349 --rc geninfo_unexecuted_blocks=1 00:35:51.349 00:35:51.349 ' 00:35:51.349 15:45:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:51.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:51.349 --rc genhtml_branch_coverage=1 00:35:51.349 --rc genhtml_function_coverage=1 00:35:51.349 --rc genhtml_legend=1 00:35:51.349 --rc geninfo_all_blocks=1 00:35:51.349 --rc geninfo_unexecuted_blocks=1 00:35:51.349 00:35:51.349 ' 00:35:51.349 15:45:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:51.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:51.349 --rc genhtml_branch_coverage=1 00:35:51.349 --rc genhtml_function_coverage=1 00:35:51.349 --rc genhtml_legend=1 00:35:51.350 --rc geninfo_all_blocks=1 00:35:51.350 --rc geninfo_unexecuted_blocks=1 00:35:51.350 00:35:51.350 ' 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:51.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:51.350 --rc genhtml_branch_coverage=1 00:35:51.350 --rc genhtml_function_coverage=1 00:35:51.350 --rc genhtml_legend=1 00:35:51.350 --rc geninfo_all_blocks=1 00:35:51.350 --rc geninfo_unexecuted_blocks=1 00:35:51.350 00:35:51.350 ' 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:35:51.350 15:45:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:59.487 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:59.487 15:45:47 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:59.487 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:59.487 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:59.487 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:59.487 15:45:47 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:59.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:59.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.670 ms 00:35:59.487 00:35:59.487 --- 10.0.0.2 ping statistics --- 00:35:59.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:59.487 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:59.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:59.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:35:59.487 00:35:59.487 --- 10.0.0.1 ping statistics --- 00:35:59.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:59.487 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=888310 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 888310 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 888310 ']' 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:59.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:59.487 15:45:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:59.487 [2024-11-20 15:45:47.721885] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:59.487 [2024-11-20 15:45:47.723082] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:35:59.487 [2024-11-20 15:45:47.723136] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:59.487 [2024-11-20 15:45:47.825127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:59.487 [2024-11-20 15:45:47.876867] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:35:59.487 [2024-11-20 15:45:47.876919] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:59.488 [2024-11-20 15:45:47.876928] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:59.488 [2024-11-20 15:45:47.876935] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:59.488 [2024-11-20 15:45:47.876942] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:59.488 [2024-11-20 15:45:47.878772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:59.488 [2024-11-20 15:45:47.878777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:59.488 [2024-11-20 15:45:47.957647] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:59.488 [2024-11-20 15:45:47.958312] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:59.488 [2024-11-20 15:45:47.958548] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:35:59.747 5000+0 records in 00:35:59.747 5000+0 records out 00:35:59.747 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0181151 s, 565 MB/s 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:59.747 AIO0 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:59.747 [2024-11-20 15:45:48.635829] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.747 15:45:48 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:59.747 [2024-11-20 15:45:48.680274] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 888310 0 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 888310 0 idle 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=888310 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:59.747 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:59.748 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:59.748 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:59.748 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 888310 -w 256 00:35:59.748 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:00.008 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 888310 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.32 reactor_0' 00:36:00.008 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 888310 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.32 reactor_0 00:36:00.008 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:00.008 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:00.008 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:00.008 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:36:00.008 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:00.008 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:00.008 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:00.008 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:00.008 15:45:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:36:00.008 15:45:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 888310 1 00:36:00.008 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 888310 1 idle 00:36:00.008 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=888310 00:36:00.008 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:00.008 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:00.008 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:00.008 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:00.008 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:00.008 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:00.008 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:00.008 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:00.008 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:00.008 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 888310 -w 256 00:36:00.008 15:45:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:00.269 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 888344 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1' 00:36:00.269 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 888344 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1 00:36:00.269 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:00.269 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:00.269 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:00.269 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:00.269 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:00.269 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:00.269 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:00.269 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:00.269 15:45:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:36:00.269 15:45:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=888678 00:36:00.269 15:45:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:36:00.269 15:45:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:36:00.269 15:45:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC 
-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:00.269 15:45:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 888310 0 00:36:00.269 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 888310 0 busy 00:36:00.269 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=888310 00:36:00.269 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:00.269 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:36:00.269 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:36:00.269 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:00.269 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:36:00.269 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:00.269 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:00.269 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:00.269 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 888310 -w 256 00:36:00.269 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:00.531 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 888310 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.49 reactor_0' 00:36:00.531 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 888310 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.49 reactor_0 00:36:00.531 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:00.531 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:00.531 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:36:00.531 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:36:00.531 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:36:00.531 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:36:00.531 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:36:00.531 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:00.531 15:45:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:36:00.531 15:45:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:36:00.531 15:45:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 888310 1 00:36:00.531 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 888310 1 busy 00:36:00.531 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=888310 00:36:00.531 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:00.531 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:36:00.531 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:36:00.531 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:00.531 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:36:00.531 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:00.531 15:45:49 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:36:00.531 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:00.531 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 888310 -w 256 00:36:00.531 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:00.531 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 888344 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.27 reactor_1' 00:36:00.531 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 888344 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.27 reactor_1 00:36:00.531 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:00.531 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:00.531 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:36:00.531 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:36:00.531 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:36:00.531 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:36:00.531 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:36:00.531 15:45:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:00.531 15:45:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 888678 00:36:10.576 Initializing NVMe Controllers 00:36:10.576 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:10.576 Controller IO queue size 256, less than required. 00:36:10.576 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:10.576 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:36:10.576 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:36:10.576 Initialization complete. Launching workers. 
00:36:10.576 ======================================================== 00:36:10.576 Latency(us) 00:36:10.576 Device Information : IOPS MiB/s Average min max 00:36:10.576 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 20101.78 78.52 12739.69 3824.48 29901.62 00:36:10.576 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 19081.58 74.54 13418.07 7970.97 51515.02 00:36:10.576 ======================================================== 00:36:10.576 Total : 39183.36 153.06 13070.05 3824.48 51515.02 00:36:10.576 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 888310 0 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 888310 0 idle 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=888310 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 888310 -w 256 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 888310 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.30 reactor_0' 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 888310 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.30 reactor_0 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 888310 1 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 888310 1 idle 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=888310 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # 
local idx=1 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 888310 -w 256 00:36:10.576 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:10.838 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 888344 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1' 00:36:10.838 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 888344 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1 00:36:10.838 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:10.838 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:10.838 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:10.838 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:10.838 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:10.838 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:10.838 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:10.838 15:45:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:10.838 15:45:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:11.410 15:46:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:36:11.410 15:46:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:36:11.410 15:46:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:36:11.410 15:46:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:36:11.410 15:46:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:36:13.320 15:46:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # 
for i in {0..1} 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 888310 0 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 888310 0 idle 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=888310 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 888310 -w 256 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 888310 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.69 reactor_0' 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 888310 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.69 reactor_0 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 888310 1 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 888310 1 idle 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=888310 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:13.581 15:46:02 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 888310 -w 256 00:36:13.581 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:13.840 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 888344 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.15 reactor_1' 00:36:13.840 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 888344 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.15 reactor_1 00:36:13.840 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:13.840 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:13.840 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:13.840 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:13.840 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:13.840 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:13.840 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:13.840 15:46:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:13.840 15:46:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:13.840 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:13.840 15:46:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:13.840 15:46:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:36:13.840 15:46:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:36:13.840 15:46:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:14.099 15:46:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:36:14.099 15:46:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:14.099 15:46:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:36:14.099 15:46:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:36:14.099 15:46:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:36:14.099 15:46:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:14.099 15:46:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:36:14.099 15:46:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:14.100 15:46:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:36:14.100 15:46:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:14.100 15:46:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:14.100 rmmod nvme_tcp 00:36:14.100 rmmod nvme_fabrics 00:36:14.100 rmmod nvme_keyring 00:36:14.100 15:46:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:14.100 15:46:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:36:14.100 15:46:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:36:14.100 15:46:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 888310 ']' 00:36:14.100 
15:46:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 888310 00:36:14.100 15:46:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 888310 ']' 00:36:14.100 15:46:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 888310 00:36:14.100 15:46:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:36:14.100 15:46:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:14.100 15:46:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 888310 00:36:14.100 15:46:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:14.100 15:46:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:14.100 15:46:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 888310' 00:36:14.100 killing process with pid 888310 00:36:14.100 15:46:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 888310 00:36:14.100 15:46:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 888310 00:36:14.360 15:46:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:14.360 15:46:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:14.360 15:46:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:14.360 15:46:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:36:14.360 15:46:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:36:14.360 15:46:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:14.360 15:46:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:36:14.360 15:46:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:14.360 15:46:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:14.360 15:46:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:14.360 15:46:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:14.360 15:46:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:16.274 15:46:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:16.274 00:36:16.274 real 0m25.262s 00:36:16.274 user 0m40.429s 00:36:16.274 sys 0m9.675s 00:36:16.274 15:46:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:16.274 15:46:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:16.274 ************************************ 00:36:16.274 END TEST nvmf_interrupt 00:36:16.274 ************************************ 00:36:16.274 00:36:16.274 real 30m10.131s 00:36:16.274 user 61m48.334s 00:36:16.274 sys 10m19.761s 00:36:16.274 15:46:05 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:16.274 15:46:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:16.274 ************************************ 00:36:16.274 END TEST nvmf_tcp 00:36:16.274 ************************************ 00:36:16.535 15:46:05 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:36:16.535 15:46:05 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:16.535 15:46:05 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 
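The reactor probes repeated throughout the nvmf_interrupt trace above reduce to one batched top sample per check. A condensed sketch of that helper, reconstructed from the commands exactly as they appear in the trace (interrupt/common.sh); the retry loop and the busy/idle branching visible in the log are simplified here to the idle case only:

    reactor_is_idle() {
        local pid=$1 idx=$2 idle_threshold=30
        # One batched sample of the target's threads; reactor_<idx> is the
        # SPDK thread name and %CPU is column 9 of top's per-thread output.
        local cpu_rate
        cpu_rate=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx" \
                   | sed -e 's/^\s*//g' | awk '{print $9}')
        cpu_rate=${cpu_rate%%.*}        # truncate "0.0" -> "0", as the trace does
        (( cpu_rate <= idle_threshold ))
    }

In the run above this is what classifies reactor_0 and reactor_1 as busy (99.9% while spdk_nvme_perf drives I/O) and idle (0.0% before and after), confirming the interrupt-mode target only consumes CPU while requests are in flight.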
00:36:16.535 15:46:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:16.535 15:46:05 -- common/autotest_common.sh@10 -- # set +x 00:36:16.535 ************************************ 00:36:16.535 START TEST spdkcli_nvmf_tcp 00:36:16.535 ************************************ 00:36:16.535 15:46:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:16.535 * Looking for test storage... 00:36:16.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:36:16.535 15:46:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:16.535 15:46:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:36:16.535 15:46:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:16.535 15:46:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:16.535 15:46:05 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:16.535 15:46:05 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:16.535 15:46:05 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:16.535 15:46:05 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:36:16.535 15:46:05 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:36:16.535 15:46:05 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:36:16.535 15:46:05 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:36:16.535 15:46:05 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:36:16.535 15:46:05 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:36:16.535 15:46:05 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:36:16.535 15:46:05 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:16.535 15:46:05 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:36:16.535 15:46:05 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:36:16.535 15:46:05 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:16.536 15:46:05 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:16.536 15:46:05 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:36:16.536 15:46:05 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:36:16.536 15:46:05 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:16.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:16.797 --rc genhtml_branch_coverage=1 00:36:16.797 --rc genhtml_function_coverage=1 00:36:16.797 --rc genhtml_legend=1 00:36:16.797 --rc geninfo_all_blocks=1 00:36:16.797 --rc geninfo_unexecuted_blocks=1 00:36:16.797 00:36:16.797 ' 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:16.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:16.797 --rc genhtml_branch_coverage=1 00:36:16.797 --rc genhtml_function_coverage=1 00:36:16.797 --rc genhtml_legend=1 00:36:16.797 --rc geninfo_all_blocks=1 00:36:16.797 --rc geninfo_unexecuted_blocks=1 00:36:16.797 00:36:16.797 ' 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:16.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:16.797 --rc genhtml_branch_coverage=1 00:36:16.797 --rc genhtml_function_coverage=1 00:36:16.797 --rc genhtml_legend=1 00:36:16.797 --rc geninfo_all_blocks=1 00:36:16.797 --rc geninfo_unexecuted_blocks=1 00:36:16.797 00:36:16.797 ' 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:16.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:16.797 --rc genhtml_branch_coverage=1 00:36:16.797 --rc genhtml_function_coverage=1 00:36:16.797 --rc genhtml_legend=1 00:36:16.797 --rc geninfo_all_blocks=1 00:36:16.797 --rc geninfo_unexecuted_blocks=1 00:36:16.797 00:36:16.797 ' 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:36:16.797 
15:46:05 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:16.797 15:46:05 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:16.798 15:46:05 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.798 15:46:05 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.798 15:46:05 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.798 15:46:05 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:36:16.798 15:46:05 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.798 15:46:05 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:36:16.798 15:46:05 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:16.798 15:46:05 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:16.798 15:46:05 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:16.798 15:46:05 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:16.798 15:46:05 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:16.798 15:46:05 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:16.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:16.798 15:46:05 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:16.798 15:46:05 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:16.798 15:46:05 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:16.798 15:46:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:36:16.798 15:46:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:36:16.798 15:46:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:36:16.798 15:46:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:36:16.798 15:46:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:16.798 15:46:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:16.798 15:46:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:36:16.798 15:46:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=891854 00:36:16.798 15:46:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 891854 00:36:16.798 15:46:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:36:16.798 15:46:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 891854 ']' 00:36:16.798 15:46:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:16.798 15:46:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:16.798 15:46:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:16.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:16.798 15:46:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:16.798 15:46:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:16.798 [2024-11-20 15:46:05.612451] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
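Note: the '[: : integer expression expected' complaint above is expected noise rather than a failure: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' because the gating variable it tests is empty in this configuration, so '[' rejects the empty string as a number, prints the diagnostic to stderr, and returns false, which is the branch the script wanted anyway. A two-line sketch of the failure mode and a guarded variant (the variable name is hypothetical):

    flag=""                            # empty test flag, as in the trace above
    [ "$flag" -eq 1 ] && echo on       # stderr: integer expression expected; the test is simply false
    [ "${flag:-0}" -eq 1 ] && echo on  # defaulting empty to 0 keeps -eq strictly numeric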
00:36:16.798 [2024-11-20 15:46:05.612527] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid891854 ] 00:36:16.798 [2024-11-20 15:46:05.702719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:17.059 [2024-11-20 15:46:05.759206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:17.059 [2024-11-20 15:46:05.759272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:17.632 15:46:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:17.632 15:46:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:36:17.632 15:46:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:36:17.632 15:46:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:17.632 15:46:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:17.632 15:46:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:36:17.632 15:46:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:36:17.632 15:46:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:36:17.632 15:46:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:17.632 15:46:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:17.632 15:46:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:36:17.632 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:36:17.632 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:36:17.632 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:36:17.632 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:36:17.632 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:36:17.632 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:36:17.632 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:17.632 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:36:17.632 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:36:17.632 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:17.632 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:17.632 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:36:17.632 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:17.632 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:17.632 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:36:17.632 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:36:17.632 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:17.632 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:17.632 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:17.632 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:36:17.632 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:36:17.632 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:17.632 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:36:17.632 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:17.632 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:36:17.632 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:36:17.632 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:36:17.632 ' 00:36:20.960 [2024-11-20 15:46:09.235327] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:21.900 [2024-11-20 15:46:10.599535] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:36:24.443 [2024-11-20 15:46:13.122552] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:36:26.991 [2024-11-20 15:46:15.356917] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:36:28.378 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:36:28.378 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:36:28.378 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:36:28.378 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:36:28.378 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:36:28.378 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:36:28.378 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:36:28.378 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:28.378 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:36:28.378 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:36:28.378 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:28.378 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:28.378 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:36:28.378 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:28.378 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:28.378 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:36:28.378 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:28.379 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:28.379 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:28.379 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:28.379 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:36:28.379 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:36:28.379 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:28.379 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:36:28.379 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:28.379 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:36:28.379 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:36:28.379 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:36:28.379 15:46:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:36:28.379 15:46:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:28.379 15:46:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:28.379 15:46:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:36:28.379 15:46:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:28.379 15:46:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:28.379 15:46:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:36:28.379 15:46:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:36:28.640 15:46:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:36:28.900 15:46:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:36:28.900 15:46:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:36:28.900 15:46:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:28.900 15:46:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:28.900 
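Note: each 'Executing command: [...]' triple above is one line of the here-string handed to test/spdkcli/spdkcli_job.py: the spdkcli command, a substring expected in that command's output, and a flag controlling whether the check is enforced. The same tree can be built by hand with one-shot invocations of scripts/spdkcli.py against the running target, the way check_match invokes 'spdkcli.py ll /nvmf' just above; a sketch reusing commands from the job, with $rootdir assumed to point at the spdk checkout:

    cli=$rootdir/scripts/spdkcli.py
    $cli "/bdevs/malloc create 32 512 Malloc1"
    $cli "nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192"
    $cli "/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True"
    $cli "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4"
    $cli ll /nvmf                      # print the resulting tree, exactly as check_match does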
15:46:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:36:28.900 15:46:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:28.900 15:46:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:28.900 15:46:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:36:28.900 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:36:28.900 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:28.900 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:36:28.900 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:36:28.900 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:36:28.900 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:36:28.900 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:28.900 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:36:28.900 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:36:28.900 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:36:28.900 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:36:28.900 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:36:28.900 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:36:28.900 ' 00:36:35.489 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:36:35.489 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:36:35.489 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:35.489 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:36:35.489 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:36:35.489 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:36:35.489 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:36:35.489 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:35.489 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:36:35.489 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:36:35.489 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:36:35.489 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:36:35.489 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:36:35.489 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:36:35.489 15:46:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:36:35.489 15:46:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:35.489 15:46:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:35.489 
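Note: check_match (spdkcli/common.sh lines 44-46 in the trace above) is the verification sandwiched between the create and clear jobs: it snapshots 'spdkcli.py ll /nvmf', compares the capture against the committed expectation with the test/app/match tool, then deletes the capture so only the .match file stays in the tree. A sketch of that step, assuming match derives the candidate file name by stripping the .match suffix, which is consistent with it being invoked with only the .match path above:

    mf=$rootdir/test/spdkcli/match_files/spdkcli_nvmf.test.match
    $rootdir/scripts/spdkcli.py ll /nvmf > "${mf%.match}"   # capture the live /nvmf tree
    $rootdir/test/app/match/match "$mf"                     # non-zero exit fails the test
    rm -f "${mf%.match}"                                    # keep only the committed expectation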
15:46:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 891854 00:36:35.489 15:46:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 891854 ']' 00:36:35.489 15:46:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 891854 00:36:35.489 15:46:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:36:35.489 15:46:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:35.489 15:46:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 891854 00:36:35.489 15:46:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:35.489 15:46:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:35.489 15:46:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 891854' 00:36:35.489 killing process with pid 891854 00:36:35.489 15:46:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 891854 00:36:35.489 15:46:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 891854 00:36:35.489 15:46:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:36:35.489 15:46:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:36:35.489 15:46:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 891854 ']' 00:36:35.489 15:46:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 891854 00:36:35.489 15:46:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 891854 ']' 00:36:35.489 15:46:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 891854 00:36:35.489 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (891854) - No such process 00:36:35.489 15:46:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 891854 is not found' 00:36:35.489 Process with pid 891854 is not found 00:36:35.489 15:46:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:36:35.489 15:46:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:36:35.489 15:46:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:36:35.489 00:36:35.489 real 0m18.239s 00:36:35.489 user 0m40.410s 00:36:35.489 sys 0m0.998s 00:36:35.489 15:46:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:35.489 15:46:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:35.489 ************************************ 00:36:35.489 END TEST spdkcli_nvmf_tcp 00:36:35.489 ************************************ 00:36:35.489 15:46:23 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:35.489 15:46:23 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:35.489 15:46:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:35.489 15:46:23 -- common/autotest_common.sh@10 -- # set +x 00:36:35.489 ************************************ 00:36:35.489 START TEST nvmf_identify_passthru 00:36:35.489 ************************************ 00:36:35.489 15:46:23 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:35.490 * Looking for test storage... 
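Note: the spdkcli test above tears its target down twice by design: killprocess probes pid 891854 with kill -0, reads the comm name via ps to make sure it is not about to kill a sudo wrapper, then kills and reaps the process with wait; the follow-up call from cleanup finds the pid already gone and only logs 'Process with pid 891854 is not found'. A condensed sketch of that helper (the name is hypothetical; the real one lives in test/common/autotest_common.sh):

    kill_and_reap() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || { echo "Process with pid $pid is not found"; return 1; }
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1   # never kill the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"    # wait reaps it because nvmf_tgt is a child of this shell
    }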
00:36:35.490 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:35.490 15:46:23 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:35.490 15:46:23 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:36:35.490 15:46:23 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:35.490 15:46:23 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:35.490 15:46:23 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:35.490 15:46:23 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:35.490 15:46:23 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:35.490 15:46:23 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:36:35.490 15:46:23 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:36:35.490 15:46:23 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:36:35.490 15:46:23 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:36:35.490 15:46:23 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:36:35.490 15:46:23 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:36:35.490 15:46:23 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:36:35.490 15:46:23 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:35.490 15:46:23 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:36:35.490 15:46:23 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:36:35.490 15:46:23 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:35.490 15:46:23 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:35.490 15:46:23 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:36:35.490 15:46:23 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:36:35.490 15:46:23 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:35.490 15:46:23 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:36:35.490 15:46:23 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:36:35.490 15:46:23 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:36:35.490 15:46:23 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:36:35.490 15:46:23 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:35.490 15:46:23 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:36:35.490 15:46:23 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:36:35.490 15:46:23 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:35.490 15:46:23 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:35.490 15:46:23 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:36:35.490 15:46:23 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:35.490 15:46:23 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:35.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:35.490 --rc genhtml_branch_coverage=1 00:36:35.490 --rc genhtml_function_coverage=1 00:36:35.490 --rc genhtml_legend=1 00:36:35.490 --rc geninfo_all_blocks=1 00:36:35.490 --rc geninfo_unexecuted_blocks=1 00:36:35.490 00:36:35.490 ' 00:36:35.490 15:46:23 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:35.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:35.490 --rc genhtml_branch_coverage=1 00:36:35.490 --rc genhtml_function_coverage=1 00:36:35.490 --rc genhtml_legend=1 00:36:35.490 --rc geninfo_all_blocks=1 00:36:35.490 --rc geninfo_unexecuted_blocks=1 00:36:35.490 00:36:35.490 ' 00:36:35.490 15:46:23 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:35.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:35.490 --rc genhtml_branch_coverage=1 00:36:35.490 --rc genhtml_function_coverage=1 00:36:35.490 --rc genhtml_legend=1 00:36:35.490 --rc geninfo_all_blocks=1 00:36:35.490 --rc geninfo_unexecuted_blocks=1 00:36:35.490 00:36:35.490 ' 00:36:35.490 15:46:23 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:35.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:35.490 --rc genhtml_branch_coverage=1 00:36:35.490 --rc genhtml_function_coverage=1 00:36:35.490 --rc genhtml_legend=1 00:36:35.490 --rc geninfo_all_blocks=1 00:36:35.490 --rc geninfo_unexecuted_blocks=1 00:36:35.490 00:36:35.490 ' 00:36:35.490 15:46:23 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:35.490 15:46:23 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:36:35.490 15:46:23 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:35.490 15:46:23 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:35.490 15:46:23 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:35.490 15:46:23 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:36:35.490 15:46:23 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:35.490 15:46:23 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:35.490 15:46:23 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:35.490 15:46:23 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:35.490 15:46:23 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:35.490 15:46:23 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:35.490 15:46:23 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:35.490 15:46:23 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:35.490 15:46:23 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:35.490 15:46:23 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:35.490 15:46:23 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:35.490 15:46:23 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:35.490 15:46:23 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:35.490 15:46:23 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:36:35.490 15:46:23 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:35.490 15:46:23 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:35.490 15:46:23 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:35.490 15:46:23 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.490 15:46:23 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.490 15:46:23 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.490 15:46:23 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:35.490 15:46:23 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.490 15:46:23 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:36:35.490 15:46:23 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:35.490 15:46:23 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:35.490 15:46:23 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:35.490 15:46:23 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:35.490 15:46:23 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:35.490 15:46:23 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:35.490 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:35.490 15:46:23 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:35.490 15:46:23 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:35.490 15:46:23 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:35.490 15:46:23 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:35.490 15:46:23 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:36:35.490 15:46:23 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:35.490 15:46:23 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:35.490 15:46:23 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:35.490 15:46:23 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.491 15:46:23 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.491 15:46:23 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.491 15:46:23 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:35.491 15:46:23 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.491 15:46:23 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:36:35.491 15:46:23 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:35.491 15:46:23 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:35.491 15:46:23 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:35.491 15:46:23 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:35.491 15:46:23 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:35.491 15:46:23 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:35.491 15:46:23 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:35.491 15:46:23 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:35.491 15:46:23 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:35.491 15:46:23 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:35.491 15:46:23 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:36:35.491 15:46:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:42.079 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:42.079 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:36:42.079 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:42.079 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:42.079 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:42.079 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:42.079 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:42.079 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:36:42.079 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:42.079 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:36:42.079 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:36:42.079 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:36:42.079 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:36:42.079 15:46:30 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:36:42.079 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:36:42.079 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:42.079 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:42.079 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:42.079 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:42.080 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:42.080 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:42.080 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:42.080 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:42.080 15:46:30 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:42.080 15:46:30 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:42.340 15:46:31 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:42.340 15:46:31 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:42.340 15:46:31 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:42.340 15:46:31 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:42.341 15:46:31 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:42.341 15:46:31 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:42.341 15:46:31 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:42.341 15:46:31 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:42.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:42.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:36:42.341 00:36:42.341 --- 10.0.0.2 ping statistics --- 00:36:42.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:42.341 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:36:42.341 15:46:31 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:42.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:42.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:36:42.341 00:36:42.341 --- 10.0.0.1 ping statistics --- 00:36:42.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:42.341 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:36:42.341 15:46:31 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:42.341 15:46:31 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:36:42.341 15:46:31 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:42.341 15:46:31 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:42.341 15:46:31 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:42.341 15:46:31 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:42.341 15:46:31 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:42.341 15:46:31 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:42.341 15:46:31 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:42.341 15:46:31 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:36:42.341 15:46:31 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:42.341 15:46:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:42.341 15:46:31 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:36:42.341 15:46:31 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:36:42.341 15:46:31 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:36:42.341 15:46:31 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:36:42.341 15:46:31 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:36:42.341 15:46:31 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:36:42.341 15:46:31 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:36:42.341 15:46:31 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:36:42.341 15:46:31 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:36:42.341 15:46:31 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:36:42.601 15:46:31 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:36:42.601 15:46:31 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:36:42.601 15:46:31 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:36:42.601 15:46:31 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:36:42.601 15:46:31 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:36:42.601 15:46:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:36:42.601 15:46:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:36:42.601 15:46:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:36:43.173 15:46:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605487 00:36:43.173 15:46:31 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:36:43.173 15:46:31 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:36:43.173 15:46:31 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:36:43.435 15:46:32 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:36:43.435 15:46:32 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:36:43.435 15:46:32 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:43.435 15:46:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:43.696 15:46:32 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:36:43.696 15:46:32 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:43.696 15:46:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:43.696 15:46:32 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=899260 00:36:43.696 15:46:32 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:43.696 15:46:32 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:36:43.696 15:46:32 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 899260 00:36:43.696 15:46:32 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 899260 ']' 00:36:43.696 15:46:32 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:43.696 15:46:32 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:43.696 15:46:32 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:43.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:43.696 15:46:32 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:43.696 15:46:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:43.696 [2024-11-20 15:46:32.466281] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:36:43.696 [2024-11-20 15:46:32.466349] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:43.696 [2024-11-20 15:46:32.567182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:43.696 [2024-11-20 15:46:32.621201] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:43.696 [2024-11-20 15:46:32.621255] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
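Note: the passthru target above is launched inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, which parks initialization until RPCs arrive: that ordering is what lets 'nvmf_set_config --passthru-identify-ctrlr' register the custom identify handler before framework_start_init brings the subsystems up (the 'Custom identify ctrlr handler enabled' notice below confirms it landed in time). A sketch of the same sequence by hand; rpc_cmd in the trace is a wrapper around scripts/rpc.py, and the waitforlisten step between launch and first RPC is elided here:

    ip netns exec cvl_0_0_ns_spdk $rootdir/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    rpc=$rootdir/scripts/rpc.py
    $rpc nvmf_set_config --passthru-identify-ctrlr   # must precede framework_start_init
    $rpc framework_start_init                        # subsystems now init with the handler armed
    $rpc nvmf_create_transport -t tcp -o -u 8192     # the transport options this test uses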
00:36:43.696 [2024-11-20 15:46:32.621264] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:43.696 [2024-11-20 15:46:32.621271] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:43.696 [2024-11-20 15:46:32.621282] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:43.696 [2024-11-20 15:46:32.623294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:43.696 [2024-11-20 15:46:32.623511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:43.696 [2024-11-20 15:46:32.623511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:43.696 [2024-11-20 15:46:32.623351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:44.638 15:46:33 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:44.638 15:46:33 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:36:44.638 15:46:33 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:36:44.638 15:46:33 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.638 15:46:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:44.638 INFO: Log level set to 20 00:36:44.638 INFO: Requests: 00:36:44.638 { 00:36:44.638 "jsonrpc": "2.0", 00:36:44.638 "method": "nvmf_set_config", 00:36:44.638 "id": 1, 00:36:44.638 "params": { 00:36:44.638 "admin_cmd_passthru": { 00:36:44.638 "identify_ctrlr": true 00:36:44.638 } 00:36:44.638 } 00:36:44.638 } 00:36:44.638 00:36:44.638 INFO: response: 00:36:44.638 { 00:36:44.638 "jsonrpc": "2.0", 00:36:44.638 "id": 1, 00:36:44.638 "result": true 00:36:44.638 } 00:36:44.638 00:36:44.638 15:46:33 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.638 15:46:33 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:36:44.638 15:46:33 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.638 15:46:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:44.638 INFO: Setting log level to 20 00:36:44.638 INFO: Setting log level to 20 00:36:44.638 INFO: Log level set to 20 00:36:44.638 INFO: Log level set to 20 00:36:44.638 INFO: Requests: 00:36:44.638 { 00:36:44.638 "jsonrpc": "2.0", 00:36:44.638 "method": "framework_start_init", 00:36:44.638 "id": 1 00:36:44.638 } 00:36:44.638 00:36:44.638 INFO: Requests: 00:36:44.638 { 00:36:44.638 "jsonrpc": "2.0", 00:36:44.638 "method": "framework_start_init", 00:36:44.638 "id": 1 00:36:44.638 } 00:36:44.638 00:36:44.638 [2024-11-20 15:46:33.344372] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:36:44.638 INFO: response: 00:36:44.638 { 00:36:44.638 "jsonrpc": "2.0", 00:36:44.638 "id": 1, 00:36:44.638 "result": true 00:36:44.638 } 00:36:44.638 00:36:44.638 INFO: response: 00:36:44.638 { 00:36:44.638 "jsonrpc": "2.0", 00:36:44.638 "id": 1, 00:36:44.638 "result": true 00:36:44.638 } 00:36:44.638 00:36:44.638 15:46:33 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.638 15:46:33 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:44.638 15:46:33 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.638 15:46:33 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:36:44.638 INFO: Setting log level to 40 00:36:44.638 INFO: Setting log level to 40 00:36:44.638 INFO: Setting log level to 40 00:36:44.638 [2024-11-20 15:46:33.357714] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:44.638 15:46:33 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.638 15:46:33 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:36:44.638 15:46:33 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:44.638 15:46:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:44.638 15:46:33 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:36:44.638 15:46:33 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.638 15:46:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:44.899 Nvme0n1 00:36:44.899 15:46:33 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.899 15:46:33 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:36:44.899 15:46:33 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.899 15:46:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:44.899 15:46:33 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.899 15:46:33 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:36:44.899 15:46:33 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.899 15:46:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:44.899 15:46:33 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.899 15:46:33 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:44.899 15:46:33 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.899 15:46:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:44.899 [2024-11-20 15:46:33.752371] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:44.899 15:46:33 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.899 15:46:33 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:36:44.899 15:46:33 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.899 15:46:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:44.899 [ 00:36:44.899 { 00:36:44.899 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:44.899 "subtype": "Discovery", 00:36:44.899 "listen_addresses": [], 00:36:44.899 "allow_any_host": true, 00:36:44.899 "hosts": [] 00:36:44.899 }, 00:36:44.899 { 00:36:44.899 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:44.899 "subtype": "NVMe", 00:36:44.899 "listen_addresses": [ 00:36:44.899 { 00:36:44.899 "trtype": "TCP", 00:36:44.899 "adrfam": "IPv4", 00:36:44.899 "traddr": "10.0.0.2", 00:36:44.899 "trsvcid": "4420" 00:36:44.899 } 00:36:44.899 ], 00:36:44.899 "allow_any_host": true, 00:36:44.899 "hosts": [], 00:36:44.899 "serial_number": 
"SPDK00000000000001", 00:36:44.899 "model_number": "SPDK bdev Controller", 00:36:44.899 "max_namespaces": 1, 00:36:44.899 "min_cntlid": 1, 00:36:44.899 "max_cntlid": 65519, 00:36:44.899 "namespaces": [ 00:36:44.899 { 00:36:44.899 "nsid": 1, 00:36:44.899 "bdev_name": "Nvme0n1", 00:36:44.899 "name": "Nvme0n1", 00:36:44.899 "nguid": "36344730526054870025384500000044", 00:36:44.899 "uuid": "36344730-5260-5487-0025-384500000044" 00:36:44.899 } 00:36:44.899 ] 00:36:44.899 } 00:36:44.899 ] 00:36:44.899 15:46:33 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.899 15:46:33 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:44.899 15:46:33 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:36:44.899 15:46:33 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:36:45.159 15:46:33 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:36:45.160 15:46:33 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:45.160 15:46:33 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:36:45.160 15:46:33 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:36:45.160 15:46:34 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:36:45.160 15:46:34 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:36:45.160 15:46:34 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:36:45.160 15:46:34 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:45.160 15:46:34 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.160 15:46:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:45.160 15:46:34 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.160 15:46:34 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:36:45.160 15:46:34 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:36:45.160 15:46:34 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:45.160 15:46:34 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:36:45.421 15:46:34 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:45.421 15:46:34 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:36:45.421 15:46:34 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:45.421 15:46:34 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:45.421 rmmod nvme_tcp 00:36:45.421 rmmod nvme_fabrics 00:36:45.421 rmmod nvme_keyring 00:36:45.421 15:46:34 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:45.421 15:46:34 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:36:45.421 15:46:34 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:36:45.421 15:46:34 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 
899260 ']' 00:36:45.421 15:46:34 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 899260 00:36:45.421 15:46:34 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 899260 ']' 00:36:45.421 15:46:34 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 899260 00:36:45.421 15:46:34 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:36:45.421 15:46:34 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:45.421 15:46:34 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 899260 00:36:45.421 15:46:34 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:45.421 15:46:34 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:45.421 15:46:34 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 899260' 00:36:45.421 killing process with pid 899260 00:36:45.421 15:46:34 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 899260 00:36:45.421 15:46:34 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 899260 00:36:45.683 15:46:34 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:45.683 15:46:34 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:45.683 15:46:34 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:45.683 15:46:34 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:36:45.683 15:46:34 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:36:45.683 15:46:34 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:45.683 15:46:34 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:36:45.683 15:46:34 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:45.683 15:46:34 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:45.683 15:46:34 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:45.683 15:46:34 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:45.683 15:46:34 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:48.224 15:46:36 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:48.224 00:36:48.224 real 0m13.010s 00:36:48.224 user 0m9.902s 00:36:48.224 sys 0m6.632s 00:36:48.224 15:46:36 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:48.224 15:46:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:48.224 ************************************ 00:36:48.224 END TEST nvmf_identify_passthru 00:36:48.224 ************************************ 00:36:48.224 15:46:36 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:48.224 15:46:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:48.224 15:46:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:48.224 15:46:36 -- common/autotest_common.sh@10 -- # set +x 00:36:48.224 ************************************ 00:36:48.224 START TEST nvmf_dif 00:36:48.224 ************************************ 00:36:48.224 15:46:36 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:48.224 * Looking for test storage... 
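The identify-passthru exercise that just finished reduces to a short RPC sequence against a target started with --wait-for-rpc (as in the nvmf_tgt launch above). A minimal sketch using scripts/rpc.py, reusing the PCIe address, IP, port, and NQN exactly as they appear in the log; treat it as an illustration of the flow, not a replacement for identify_passthru.sh:

    # Identify passthrough must be configured before framework init,
    # which is why the target was launched with --wait-for-rpc
    scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
    scripts/rpc.py framework_start_init
    # TCP transport, local PCIe controller as bdev Nvme0, single-namespace subsystem
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

With identify_ctrlr passthrough enabled, spdk_nvme_identify pointed at the TCP listener reports the backing drive's serial and model (S64GNE0R605487 / SAMSUNG) rather than the synthetic SPDK controller identity, which is exactly what the two '!=' comparisons above verified before tearing the subsystem down.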
00:36:48.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:48.224 15:46:36 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:48.224 15:46:36 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:36:48.224 15:46:36 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:48.224 15:46:36 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:48.224 15:46:36 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:48.224 15:46:36 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:48.224 15:46:36 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:48.224 15:46:36 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:36:48.224 15:46:36 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:36:48.224 15:46:36 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:36:48.224 15:46:36 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:36:48.224 15:46:36 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:36:48.224 15:46:36 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:36:48.224 15:46:36 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:36:48.224 15:46:36 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:48.224 15:46:36 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:36:48.224 15:46:36 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:36:48.224 15:46:36 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:48.224 15:46:36 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:48.224 15:46:36 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:36:48.224 15:46:36 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:36:48.224 15:46:36 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:48.224 15:46:36 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:36:48.224 15:46:36 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:36:48.224 15:46:36 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:36:48.224 15:46:36 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:36:48.224 15:46:36 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:48.224 15:46:36 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:36:48.224 15:46:36 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:36:48.224 15:46:36 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:48.224 15:46:36 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:48.224 15:46:36 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:36:48.224 15:46:36 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:48.224 15:46:36 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:48.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:48.224 --rc genhtml_branch_coverage=1 00:36:48.225 --rc genhtml_function_coverage=1 00:36:48.225 --rc genhtml_legend=1 00:36:48.225 --rc geninfo_all_blocks=1 00:36:48.225 --rc geninfo_unexecuted_blocks=1 00:36:48.225 00:36:48.225 ' 00:36:48.225 15:46:36 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:48.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:48.225 --rc genhtml_branch_coverage=1 00:36:48.225 --rc genhtml_function_coverage=1 00:36:48.225 --rc genhtml_legend=1 00:36:48.225 --rc geninfo_all_blocks=1 00:36:48.225 --rc geninfo_unexecuted_blocks=1 00:36:48.225 00:36:48.225 ' 00:36:48.225 15:46:36 nvmf_dif -- common/autotest_common.sh@1707 -- # 
export 'LCOV=lcov 00:36:48.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:48.225 --rc genhtml_branch_coverage=1 00:36:48.225 --rc genhtml_function_coverage=1 00:36:48.225 --rc genhtml_legend=1 00:36:48.225 --rc geninfo_all_blocks=1 00:36:48.225 --rc geninfo_unexecuted_blocks=1 00:36:48.225 00:36:48.225 ' 00:36:48.225 15:46:36 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:48.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:48.225 --rc genhtml_branch_coverage=1 00:36:48.225 --rc genhtml_function_coverage=1 00:36:48.225 --rc genhtml_legend=1 00:36:48.225 --rc geninfo_all_blocks=1 00:36:48.225 --rc geninfo_unexecuted_blocks=1 00:36:48.225 00:36:48.225 ' 00:36:48.225 15:46:36 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:48.225 15:46:36 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:36:48.225 15:46:36 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:48.225 15:46:36 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:48.225 15:46:36 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:48.225 15:46:36 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:48.225 15:46:36 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:48.225 15:46:36 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:48.225 15:46:36 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:48.225 15:46:36 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:48.225 15:46:36 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:48.225 15:46:36 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:48.225 15:46:36 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:48.225 15:46:36 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:48.225 15:46:36 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:48.225 15:46:36 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:48.225 15:46:36 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:48.225 15:46:36 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:48.225 15:46:36 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:48.225 15:46:36 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:36:48.225 15:46:36 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:48.225 15:46:36 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:48.225 15:46:36 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:48.225 15:46:36 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:48.225 15:46:36 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:48.225 15:46:36 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:48.225 15:46:36 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:36:48.225 15:46:36 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:48.225 15:46:36 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:36:48.225 15:46:36 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:48.225 15:46:36 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:48.225 15:46:36 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:48.225 15:46:36 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:48.225 15:46:36 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:48.225 15:46:36 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:48.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:48.225 15:46:36 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:48.225 15:46:36 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:48.225 15:46:36 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:48.225 15:46:36 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:36:48.225 15:46:36 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:36:48.225 15:46:36 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:36:48.225 15:46:36 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:36:48.225 15:46:36 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:36:48.225 15:46:36 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:48.225 15:46:36 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:48.225 15:46:36 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:48.225 15:46:36 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:48.225 15:46:36 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:48.225 15:46:36 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:48.225 15:46:36 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:48.225 15:46:36 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:48.225 15:46:36 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:48.225 15:46:36 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:48.225 15:46:36 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:36:48.225 15:46:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:56.359 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:56.359 
15:46:44 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:56.359 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:56.359 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:56.359 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:56.359 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:56.359 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:36:56.359 00:36:56.359 --- 10.0.0.2 ping statistics --- 00:36:56.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:56.359 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:56.359 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:56.359 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:36:56.359 00:36:56.359 --- 10.0.0.1 ping statistics --- 00:36:56.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:56.359 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:36:56.359 15:46:44 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:58.906 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:58.906 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:58.907 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:58.907 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:58.907 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:58.907 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:58.907 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:58.907 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:58.907 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:58.907 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:36:58.907 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:58.907 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:58.907 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:58.907 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:58.907 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:58.907 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:58.907 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:59.167 15:46:48 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:59.167 15:46:48 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:59.167 15:46:48 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:59.167 15:46:48 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:59.167 15:46:48 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:59.168 15:46:48 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:59.429 15:46:48 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:36:59.429 15:46:48 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:36:59.429 15:46:48 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:59.429 15:46:48 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:59.429 15:46:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:59.429 15:46:48 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=905395 00:36:59.429 15:46:48 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 905395 00:36:59.429 15:46:48 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:36:59.429 15:46:48 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 905395 ']' 00:36:59.429 15:46:48 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:59.429 15:46:48 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:59.429 15:46:48 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:36:59.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:59.429 15:46:48 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:59.429 15:46:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:59.429 [2024-11-20 15:46:48.229663] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:36:59.429 [2024-11-20 15:46:48.229728] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:59.429 [2024-11-20 15:46:48.327795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:59.429 [2024-11-20 15:46:48.378429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:59.429 [2024-11-20 15:46:48.378481] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:59.429 [2024-11-20 15:46:48.378490] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:59.429 [2024-11-20 15:46:48.378497] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:59.429 [2024-11-20 15:46:48.378503] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:59.429 [2024-11-20 15:46:48.379337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:00.381 15:46:49 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:00.381 15:46:49 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:37:00.382 15:46:49 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:00.382 15:46:49 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:00.382 15:46:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:00.382 15:46:49 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:00.382 15:46:49 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:37:00.382 15:46:49 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:37:00.382 15:46:49 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.382 15:46:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:00.382 [2024-11-20 15:46:49.093620] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:00.382 15:46:49 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.382 15:46:49 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:37:00.382 15:46:49 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:00.382 15:46:49 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:00.382 15:46:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:00.382 ************************************ 00:37:00.382 START TEST fio_dif_1_default 00:37:00.382 ************************************ 00:37:00.382 15:46:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:37:00.382 15:46:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:37:00.383 15:46:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:37:00.383 15:46:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:37:00.383 15:46:49 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:37:00.383 15:46:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:37:00.383 15:46:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:00.383 15:46:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.383 15:46:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:00.383 bdev_null0 00:37:00.383 15:46:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.383 15:46:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:00.383 15:46:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.383 15:46:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:00.383 15:46:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.383 15:46:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:00.383 15:46:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.383 15:46:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:00.383 15:46:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.383 15:46:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:00.383 15:46:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.383 15:46:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:00.383 [2024-11-20 15:46:49.186098] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:00.383 15:46:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.383 15:46:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:37:00.384 15:46:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:37:00.384 15:46:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:00.384 15:46:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:37:00.384 15:46:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:37:00.384 15:46:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:00.384 15:46:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:00.384 15:46:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:00.384 { 00:37:00.384 "params": { 00:37:00.384 "name": "Nvme$subsystem", 00:37:00.384 "trtype": "$TEST_TRANSPORT", 00:37:00.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:00.384 "adrfam": "ipv4", 00:37:00.384 "trsvcid": "$NVMF_PORT", 00:37:00.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:00.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:00.384 "hdgst": ${hdgst:-false}, 00:37:00.384 "ddgst": ${ddgst:-false} 00:37:00.384 }, 00:37:00.384 "method": "bdev_nvme_attach_controller" 00:37:00.384 } 00:37:00.384 EOF 00:37:00.384 )") 00:37:00.384 15:46:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:00.384 15:46:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:00.384 15:46:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:37:00.384 15:46:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:00.384 15:46:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:00.384 15:46:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:37:00.385 15:46:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:00.385 15:46:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:37:00.385 15:46:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:37:00.385 15:46:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:00.385 15:46:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:00.385 15:46:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:37:00.385 15:46:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:00.385 15:46:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:37:00.385 15:46:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:37:00.385 15:46:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:00.385 15:46:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:37:00.385 15:46:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
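The xtrace above shows how fio_dif_1 drives fio through SPDK's external bdev ioengine: gen_nvmf_target_json emits a bdev JSON config on /dev/fd/62, gen_fio_conf emits the job file on /dev/fd/61, and fio_bdev runs fio with the plugin LD_PRELOADed. A standalone sketch of the same invocation with the file paths made explicit; the /tmp names are illustrative, the JSON wrapper follows SPDK's standard subsystem-config shape, and the params mirror the printf output below:

    cat > /tmp/bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false, "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    cat > /tmp/dif.fio <<'EOF'
    [global]
    # the SPDK fio plugin requires fio's thread mode
    thread=1
    [filename0]
    # bdev name created by the attach_controller entry above
    filename=Nvme0n1
    rw=randread
    bs=4096
    iodepth=4
    EOF
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/dif.fio

Feeding both files over /dev/fd, as the test does, is the same invocation without temporary files on disk.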
00:37:00.385 15:46:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:37:00.385 15:46:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:00.385 "params": { 00:37:00.385 "name": "Nvme0", 00:37:00.385 "trtype": "tcp", 00:37:00.385 "traddr": "10.0.0.2", 00:37:00.385 "adrfam": "ipv4", 00:37:00.385 "trsvcid": "4420", 00:37:00.385 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:00.385 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:00.385 "hdgst": false, 00:37:00.385 "ddgst": false 00:37:00.385 }, 00:37:00.385 "method": "bdev_nvme_attach_controller" 00:37:00.385 }' 00:37:00.386 15:46:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:00.386 15:46:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:00.386 15:46:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:00.386 15:46:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:00.386 15:46:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:00.386 15:46:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:00.386 15:46:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:00.386 15:46:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:00.386 15:46:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:00.386 15:46:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:00.651 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:00.651 fio-3.35 00:37:00.651 Starting 1 thread 00:37:12.885 00:37:12.885 filename0: (groupid=0, jobs=1): err= 0: pid=905922: Wed Nov 20 15:47:00 2024 00:37:12.885 read: IOPS=189, BW=758KiB/s (776kB/s)(7600KiB/10033msec) 00:37:12.885 slat (nsec): min=5509, max=61765, avg=6248.79, stdev=2060.33 00:37:12.885 clat (usec): min=577, max=44310, avg=21103.82, stdev=20188.13 00:37:12.885 lat (usec): min=583, max=44371, avg=21110.07, stdev=20188.14 00:37:12.885 clat percentiles (usec): 00:37:12.885 | 1.00th=[ 693], 5.00th=[ 783], 10.00th=[ 799], 20.00th=[ 873], 00:37:12.885 | 30.00th=[ 914], 40.00th=[ 947], 50.00th=[41157], 60.00th=[41157], 00:37:12.885 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:37:12.885 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:37:12.885 | 99.99th=[44303] 00:37:12.885 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=758.40, stdev=23.45, samples=20 00:37:12.885 iops : min= 176, max= 192, avg=189.60, stdev= 5.86, samples=20 00:37:12.885 lat (usec) : 750=2.89%, 1000=44.11% 00:37:12.885 lat (msec) : 2=2.89%, 50=50.11% 00:37:12.885 cpu : usr=93.35%, sys=6.43%, ctx=9, majf=0, minf=253 00:37:12.885 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:12.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:12.885 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:12.885 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:12.885 latency : target=0, window=0, percentile=100.00%, depth=4 
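The report above is internally consistent, which is a quick way to sanity-check any fio run: issued rwts shows 1900 reads at bs=4096 over a 10033 msec run. A sketch of the arithmetic, using only numbers from the output:

    # 1900 reads x 4096 B = 7600 KiB, matching io=7600KiB
    awk 'BEGIN { print 1900 * 4096 / 1024 }'
    # 7600 KiB / 10.033 s ~= 757.5 KiB/s, matching BW=758KiB/s
    awk 'BEGIN { print 7600 / 10.033 }'
    # 1900 reads / 10.033 s ~= 189.4, matching IOPS=189
    awk 'BEGIN { print 1900 / 10.033 }'

Note also the bimodal clat distribution: roughly half the completions land under 1 ms and the other half near 41 ms, so the 21 ms average is not informative on its own.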
00:37:12.885 00:37:12.885 Run status group 0 (all jobs): 00:37:12.885 READ: bw=758KiB/s (776kB/s), 758KiB/s-758KiB/s (776kB/s-776kB/s), io=7600KiB (7782kB), run=10033-10033msec 00:37:12.885 15:47:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:37:12.885 15:47:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:37:12.885 15:47:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:37:12.885 15:47:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:12.885 15:47:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:37:12.885 15:47:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:12.885 15:47:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.885 15:47:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:12.885 15:47:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.885 15:47:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:12.885 15:47:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.885 15:47:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:12.885 15:47:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.885 00:37:12.885 real 0m11.203s 00:37:12.885 user 0m17.705s 00:37:12.885 sys 0m1.089s 00:37:12.885 15:47:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:12.885 15:47:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:12.885 ************************************ 00:37:12.885 END TEST fio_dif_1_default 00:37:12.885 ************************************ 00:37:12.885 15:47:00 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:37:12.885 15:47:00 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:12.885 15:47:00 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:12.885 15:47:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:12.885 ************************************ 00:37:12.885 START TEST fio_dif_1_multi_subsystems 00:37:12.885 ************************************ 00:37:12.885 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:37:12.885 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:37:12.885 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:37:12.885 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:37:12.885 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:12.885 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:37:12.885 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:37:12.885 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:12.885 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.885 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:12.885 bdev_null0 00:37:12.885 15:47:00 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.885 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:12.885 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.885 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:12.885 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.885 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:12.886 [2024-11-20 15:47:00.467116] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:12.886 bdev_null1 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:12.886 { 00:37:12.886 "params": { 00:37:12.886 "name": "Nvme$subsystem", 00:37:12.886 "trtype": "$TEST_TRANSPORT", 00:37:12.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:12.886 "adrfam": "ipv4", 00:37:12.886 "trsvcid": "$NVMF_PORT", 00:37:12.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:12.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:12.886 "hdgst": ${hdgst:-false}, 00:37:12.886 "ddgst": ${ddgst:-false} 00:37:12.886 }, 00:37:12.886 "method": "bdev_nvme_attach_controller" 00:37:12.886 } 00:37:12.886 EOF 00:37:12.886 )") 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:12.886 
15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:12.886 { 00:37:12.886 "params": { 00:37:12.886 "name": "Nvme$subsystem", 00:37:12.886 "trtype": "$TEST_TRANSPORT", 00:37:12.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:12.886 "adrfam": "ipv4", 00:37:12.886 "trsvcid": "$NVMF_PORT", 00:37:12.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:12.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:12.886 "hdgst": ${hdgst:-false}, 00:37:12.886 "ddgst": ${ddgst:-false} 00:37:12.886 }, 00:37:12.886 "method": "bdev_nvme_attach_controller" 00:37:12.886 } 00:37:12.886 EOF 00:37:12.886 )") 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:12.886 "params": { 00:37:12.886 "name": "Nvme0", 00:37:12.886 "trtype": "tcp", 00:37:12.886 "traddr": "10.0.0.2", 00:37:12.886 "adrfam": "ipv4", 00:37:12.886 "trsvcid": "4420", 00:37:12.886 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:12.886 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:12.886 "hdgst": false, 00:37:12.886 "ddgst": false 00:37:12.886 }, 00:37:12.886 "method": "bdev_nvme_attach_controller" 00:37:12.886 },{ 00:37:12.886 "params": { 00:37:12.886 "name": "Nvme1", 00:37:12.886 "trtype": "tcp", 00:37:12.886 "traddr": "10.0.0.2", 00:37:12.886 "adrfam": "ipv4", 00:37:12.886 "trsvcid": "4420", 00:37:12.886 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:12.886 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:12.886 "hdgst": false, 00:37:12.886 "ddgst": false 00:37:12.886 }, 00:37:12.886 "method": "bdev_nvme_attach_controller" 00:37:12.886 }' 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 
-- # asan_lib= 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:12.886 15:47:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:12.886 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:12.886 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:12.886 fio-3.35 00:37:12.886 Starting 2 threads 00:37:23.063 00:37:23.063 filename0: (groupid=0, jobs=1): err= 0: pid=908169: Wed Nov 20 15:47:11 2024 00:37:23.063 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10003msec) 00:37:23.063 slat (nsec): min=5504, max=28560, avg=6373.77, stdev=1529.15 00:37:23.063 clat (usec): min=546, max=42300, avg=21040.15, stdev=20159.14 00:37:23.063 lat (usec): min=554, max=42328, avg=21046.52, stdev=20159.07 00:37:23.063 clat percentiles (usec): 00:37:23.063 | 1.00th=[ 619], 5.00th=[ 775], 10.00th=[ 807], 20.00th=[ 848], 00:37:23.063 | 30.00th=[ 865], 40.00th=[ 889], 50.00th=[41157], 60.00th=[41157], 00:37:23.063 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:23.063 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:23.063 | 99.99th=[42206] 00:37:23.063 bw ( KiB/s): min= 704, max= 768, per=50.09%, avg=761.26, stdev=20.18, samples=19 00:37:23.063 iops : min= 176, max= 192, avg=190.32, stdev= 5.04, samples=19 00:37:23.063 lat (usec) : 750=4.00%, 1000=45.00% 00:37:23.063 lat (msec) : 2=0.89%, 50=50.11% 00:37:23.063 cpu : usr=95.70%, sys=4.09%, ctx=13, majf=0, minf=184 00:37:23.063 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:23.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:23.063 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:23.063 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:23.063 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:23.063 filename1: (groupid=0, jobs=1): err= 0: pid=908170: Wed Nov 20 15:47:11 2024 00:37:23.063 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10005msec) 00:37:23.063 slat (nsec): min=5505, max=29901, avg=6477.37, stdev=1493.47 00:37:23.063 clat (usec): min=443, max=42192, avg=21043.98, stdev=20167.50 00:37:23.063 lat (usec): min=451, max=42222, avg=21050.46, stdev=20167.45 00:37:23.063 clat percentiles (usec): 00:37:23.063 | 1.00th=[ 603], 5.00th=[ 799], 10.00th=[ 816], 20.00th=[ 840], 00:37:23.063 | 30.00th=[ 857], 40.00th=[ 865], 50.00th=[41157], 60.00th=[41157], 00:37:23.063 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:23.063 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:23.063 | 99.99th=[42206] 00:37:23.063 bw ( KiB/s): min= 704, max= 768, per=49.89%, avg=758.40, stdev=23.45, samples=20 00:37:23.063 iops : min= 176, max= 192, avg=189.60, stdev= 5.86, samples=20 00:37:23.063 lat (usec) : 500=0.32%, 750=2.21%, 1000=46.05% 00:37:23.063 lat (msec) : 2=1.32%, 50=50.11% 00:37:23.063 cpu : usr=95.88%, sys=3.90%, ctx=13, majf=0, minf=87 00:37:23.063 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:23.063 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:23.063 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:23.063 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:23.063 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:23.063 00:37:23.063 Run status group 0 (all jobs): 00:37:23.063 READ: bw=1519KiB/s (1556kB/s), 760KiB/s-760KiB/s (778kB/s-778kB/s), io=14.8MiB (15.6MB), run=10003-10005msec 00:37:23.063 15:47:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:37:23.063 15:47:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:37:23.063 15:47:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:23.063 15:47:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:23.063 15:47:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:37:23.063 15:47:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:23.063 15:47:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.063 15:47:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:23.063 15:47:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.064 15:47:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:23.064 15:47:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.064 15:47:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:23.064 15:47:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.064 15:47:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:23.064 15:47:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:23.064 15:47:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:37:23.064 15:47:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:23.064 15:47:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.064 15:47:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:23.064 15:47:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.064 15:47:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:23.064 15:47:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.064 15:47:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:23.064 15:47:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.064 00:37:23.064 real 0m11.346s 00:37:23.064 user 0m36.355s 00:37:23.064 sys 0m1.173s 00:37:23.064 15:47:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:23.064 15:47:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:23.064 ************************************ 00:37:23.064 END TEST fio_dif_1_multi_subsystems 00:37:23.064 ************************************ 00:37:23.064 15:47:11 
nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:37:23.064 15:47:11 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:23.064 15:47:11 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:23.064 15:47:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:23.064 ************************************ 00:37:23.064 START TEST fio_dif_rand_params 00:37:23.064 ************************************ 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:23.064 bdev_null0 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:23.064 [2024-11-20 15:47:11.893943] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 10.0.0.2 port 4420 *** 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:23.064 { 00:37:23.064 "params": { 00:37:23.064 "name": "Nvme$subsystem", 00:37:23.064 "trtype": "$TEST_TRANSPORT", 00:37:23.064 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:23.064 "adrfam": "ipv4", 00:37:23.064 "trsvcid": "$NVMF_PORT", 00:37:23.064 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:23.064 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:23.064 "hdgst": ${hdgst:-false}, 00:37:23.064 "ddgst": ${ddgst:-false} 00:37:23.064 }, 00:37:23.064 "method": "bdev_nvme_attach_controller" 00:37:23.064 } 00:37:23.064 EOF 00:37:23.064 )") 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params 
-- nvmf/common.sh@584 -- # jq . 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:23.064 "params": { 00:37:23.064 "name": "Nvme0", 00:37:23.064 "trtype": "tcp", 00:37:23.064 "traddr": "10.0.0.2", 00:37:23.064 "adrfam": "ipv4", 00:37:23.064 "trsvcid": "4420", 00:37:23.064 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:23.064 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:23.064 "hdgst": false, 00:37:23.064 "ddgst": false 00:37:23.064 }, 00:37:23.064 "method": "bdev_nvme_attach_controller" 00:37:23.064 }' 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:23.064 15:47:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:23.647 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:23.648 ... 
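The fio launch traced above always follows the same pattern: ldd resolves any sanitizer runtime (libasan or libclang_rt.asan) that the spdk_bdev fio plugin links against, and whatever is found is preloaded ahead of the plugin itself before fio starts. A minimal sketch of that pattern in plain bash, assuming the workspace paths shown in the trace; in the harness the JSON bdev config and the fio job file are handed over as /dev/fd/62 and /dev/fd/61 via process substitution, but ordinary files behave the same:

    plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
    # Resolve the ASAN runtime the plugin was linked against; this is empty on a
    # non-sanitized build (as in this run), so only the plugin gets preloaded.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio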
00:37:23.648 fio-3.35 00:37:23.648 Starting 3 threads 00:37:28.937 00:37:28.937 filename0: (groupid=0, jobs=1): err= 0: pid=910380: Wed Nov 20 15:47:17 2024 00:37:28.937 read: IOPS=246, BW=30.8MiB/s (32.3MB/s)(155MiB/5041msec) 00:37:28.937 slat (nsec): min=5523, max=35896, avg=6605.89, stdev=1787.83 00:37:28.937 clat (usec): min=3366, max=89182, avg=12156.55, stdev=14651.78 00:37:28.937 lat (usec): min=3372, max=89189, avg=12163.15, stdev=14651.77 00:37:28.937 clat percentiles (usec): 00:37:28.937 | 1.00th=[ 3851], 5.00th=[ 4686], 10.00th=[ 5211], 20.00th=[ 5800], 00:37:28.937 | 30.00th=[ 6194], 40.00th=[ 6456], 50.00th=[ 6718], 60.00th=[ 6915], 00:37:28.937 | 70.00th=[ 7242], 80.00th=[ 7570], 90.00th=[46924], 95.00th=[47973], 00:37:28.937 | 99.00th=[49021], 99.50th=[49021], 99.90th=[88605], 99.95th=[89654], 00:37:28.937 | 99.99th=[89654] 00:37:28.937 bw ( KiB/s): min=19456, max=46592, per=28.58%, avg=31744.00, stdev=9753.42, samples=10 00:37:28.937 iops : min= 152, max= 364, avg=248.00, stdev=76.20, samples=10 00:37:28.937 lat (msec) : 4=1.53%, 10=84.79%, 50=13.35%, 100=0.32% 00:37:28.937 cpu : usr=94.01%, sys=4.48%, ctx=326, majf=0, minf=96 00:37:28.937 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:28.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.937 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.937 issued rwts: total=1243,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:28.937 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:28.937 filename0: (groupid=0, jobs=1): err= 0: pid=910381: Wed Nov 20 15:47:17 2024 00:37:28.937 read: IOPS=301, BW=37.7MiB/s (39.5MB/s)(190MiB/5046msec) 00:37:28.937 slat (nsec): min=5568, max=42082, avg=8072.14, stdev=1878.27 00:37:28.937 clat (usec): min=4357, max=88040, avg=9912.62, stdev=8800.93 00:37:28.937 lat (usec): min=4366, max=88049, avg=9920.69, stdev=8801.14 00:37:28.937 clat percentiles (usec): 00:37:28.937 | 1.00th=[ 4817], 5.00th=[ 5473], 10.00th=[ 5932], 20.00th=[ 6456], 00:37:28.937 | 30.00th=[ 7111], 40.00th=[ 7701], 50.00th=[ 8160], 60.00th=[ 8586], 00:37:28.937 | 70.00th=[ 9503], 80.00th=[10159], 90.00th=[10945], 95.00th=[12125], 00:37:28.937 | 99.00th=[48497], 99.50th=[50070], 99.90th=[87557], 99.95th=[87557], 00:37:28.937 | 99.99th=[87557] 00:37:28.937 bw ( KiB/s): min=22272, max=48640, per=35.02%, avg=38886.40, stdev=8881.60, samples=10 00:37:28.937 iops : min= 174, max= 380, avg=303.80, stdev=69.39, samples=10 00:37:28.937 lat (msec) : 10=76.33%, 20=19.46%, 50=3.75%, 100=0.46% 00:37:28.937 cpu : usr=94.71%, sys=5.07%, ctx=6, majf=0, minf=118 00:37:28.937 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:28.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.937 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.937 issued rwts: total=1521,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:28.937 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:28.937 filename0: (groupid=0, jobs=1): err= 0: pid=910382: Wed Nov 20 15:47:17 2024 00:37:28.937 read: IOPS=319, BW=40.0MiB/s (41.9MB/s)(202MiB/5045msec) 00:37:28.937 slat (nsec): min=5538, max=41402, avg=8244.13, stdev=1844.07 00:37:28.937 clat (usec): min=3717, max=88183, avg=9340.93, stdev=7773.45 00:37:28.937 lat (usec): min=3726, max=88192, avg=9349.17, stdev=7773.64 00:37:28.937 clat percentiles (usec): 00:37:28.937 | 1.00th=[ 4490], 5.00th=[ 5080], 10.00th=[ 5604], 20.00th=[ 6194], 
00:37:28.937 | 30.00th=[ 6718], 40.00th=[ 7242], 50.00th=[ 7832], 60.00th=[ 8455], 00:37:28.937 | 70.00th=[ 9110], 80.00th=[10028], 90.00th=[10945], 95.00th=[11600], 00:37:28.937 | 99.00th=[47973], 99.50th=[49021], 99.90th=[51119], 99.95th=[88605], 00:37:28.937 | 99.99th=[88605] 00:37:28.937 bw ( KiB/s): min=23296, max=48384, per=37.16%, avg=41267.20, stdev=7562.29, samples=10 00:37:28.937 iops : min= 182, max= 378, avg=322.40, stdev=59.08, samples=10 00:37:28.937 lat (msec) : 4=0.12%, 10=79.80%, 20=16.48%, 50=3.47%, 100=0.12% 00:37:28.937 cpu : usr=94.23%, sys=5.55%, ctx=6, majf=0, minf=60 00:37:28.937 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:28.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.937 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.937 issued rwts: total=1614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:28.937 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:28.937 00:37:28.937 Run status group 0 (all jobs): 00:37:28.937 READ: bw=108MiB/s (114MB/s), 30.8MiB/s-40.0MiB/s (32.3MB/s-41.9MB/s), io=547MiB (574MB), run=5041-5046msec 00:37:29.198 15:47:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:37:29.198 15:47:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:29.198 15:47:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:29.198 15:47:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:29.198 15:47:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:29.198 15:47:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:29.198 15:47:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.198 15:47:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:29.198 bdev_null0 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:29.198 [2024-11-20 15:47:18.058633] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:29.198 bdev_null1 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.198 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:29.199 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:37:29.199 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:37:29.199 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:37:29.199 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.199 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:29.199 bdev_null2 00:37:29.199 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.199 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:37:29.199 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.199 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:29.199 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.199 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:37:29.199 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.199 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:29.199 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.199 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:37:29.199 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.199 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:29.460 15:47:18 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:29.460 { 00:37:29.460 "params": { 00:37:29.460 "name": "Nvme$subsystem", 00:37:29.460 "trtype": "$TEST_TRANSPORT", 00:37:29.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:29.460 "adrfam": "ipv4", 00:37:29.460 "trsvcid": "$NVMF_PORT", 00:37:29.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:29.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:29.460 "hdgst": ${hdgst:-false}, 00:37:29.460 "ddgst": ${ddgst:-false} 00:37:29.460 }, 00:37:29.460 "method": "bdev_nvme_attach_controller" 00:37:29.460 } 00:37:29.460 EOF 00:37:29.460 )") 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:29.460 { 00:37:29.460 "params": { 00:37:29.460 "name": "Nvme$subsystem", 00:37:29.460 "trtype": "$TEST_TRANSPORT", 00:37:29.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:29.460 "adrfam": "ipv4", 00:37:29.460 "trsvcid": "$NVMF_PORT", 00:37:29.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:29.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:29.460 "hdgst": ${hdgst:-false}, 00:37:29.460 "ddgst": ${ddgst:-false} 00:37:29.460 }, 00:37:29.460 "method": "bdev_nvme_attach_controller" 00:37:29.460 } 00:37:29.460 EOF 00:37:29.460 )") 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:29.460 15:47:18 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:29.460 { 00:37:29.460 "params": { 00:37:29.460 "name": "Nvme$subsystem", 00:37:29.460 "trtype": "$TEST_TRANSPORT", 00:37:29.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:29.460 "adrfam": "ipv4", 00:37:29.460 "trsvcid": "$NVMF_PORT", 00:37:29.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:29.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:29.460 "hdgst": ${hdgst:-false}, 00:37:29.460 "ddgst": ${ddgst:-false} 00:37:29.460 }, 00:37:29.460 "method": "bdev_nvme_attach_controller" 00:37:29.460 } 00:37:29.460 EOF 00:37:29.460 )") 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:29.460 "params": { 00:37:29.460 "name": "Nvme0", 00:37:29.460 "trtype": "tcp", 00:37:29.460 "traddr": "10.0.0.2", 00:37:29.460 "adrfam": "ipv4", 00:37:29.460 "trsvcid": "4420", 00:37:29.460 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:29.460 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:29.460 "hdgst": false, 00:37:29.460 "ddgst": false 00:37:29.460 }, 00:37:29.460 "method": "bdev_nvme_attach_controller" 00:37:29.460 },{ 00:37:29.460 "params": { 00:37:29.460 "name": "Nvme1", 00:37:29.460 "trtype": "tcp", 00:37:29.460 "traddr": "10.0.0.2", 00:37:29.460 "adrfam": "ipv4", 00:37:29.460 "trsvcid": "4420", 00:37:29.460 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:29.460 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:29.460 "hdgst": false, 00:37:29.460 "ddgst": false 00:37:29.460 }, 00:37:29.460 "method": "bdev_nvme_attach_controller" 00:37:29.460 },{ 00:37:29.460 "params": { 00:37:29.460 "name": "Nvme2", 00:37:29.460 "trtype": "tcp", 00:37:29.460 "traddr": "10.0.0.2", 00:37:29.460 "adrfam": "ipv4", 00:37:29.460 "trsvcid": "4420", 00:37:29.460 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:37:29.460 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:37:29.460 "hdgst": false, 00:37:29.460 "ddgst": false 00:37:29.460 }, 00:37:29.460 "method": "bdev_nvme_attach_controller" 00:37:29.460 }' 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:29.460 
15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:29.460 15:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:29.721 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:29.721 ... 00:37:29.721 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:29.721 ... 00:37:29.721 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:29.721 ... 00:37:29.721 fio-3.35 00:37:29.721 Starting 24 threads 00:37:41.958 00:37:41.958 filename0: (groupid=0, jobs=1): err= 0: pid=911881: Wed Nov 20 15:47:29 2024 00:37:41.958 read: IOPS=693, BW=2776KiB/s (2843kB/s)(27.1MiB/10006msec) 00:37:41.958 slat (nsec): min=5658, max=87726, avg=11031.77, stdev=9412.45 00:37:41.958 clat (usec): min=1032, max=25344, avg=22966.74, stdev=4135.62 00:37:41.958 lat (usec): min=1044, max=25351, avg=22977.77, stdev=4134.32 00:37:41.958 clat percentiles (usec): 00:37:41.958 | 1.00th=[ 1352], 5.00th=[22676], 10.00th=[23462], 20.00th=[23462], 00:37:41.958 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:41.958 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24249], 95.00th=[24511], 00:37:41.958 | 99.00th=[24773], 99.50th=[25035], 99.90th=[25297], 99.95th=[25297], 00:37:41.958 | 99.99th=[25297] 00:37:41.958 bw ( KiB/s): min= 2560, max= 4608, per=4.30%, avg=2782.32, stdev=445.13, samples=19 00:37:41.958 iops : min= 640, max= 1152, avg=695.58, stdev=111.28, samples=19 00:37:41.958 lat (msec) : 2=2.49%, 4=0.66%, 10=0.30%, 20=1.38%, 50=95.16% 00:37:41.958 cpu : usr=98.43%, sys=1.05%, ctx=142, majf=0, minf=45 00:37:41.958 IO depths : 1=6.1%, 2=12.2%, 4=24.5%, 8=50.8%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:41.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.958 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.958 issued rwts: total=6944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:41.958 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:41.958 filename0: (groupid=0, jobs=1): err= 0: pid=911882: Wed Nov 20 15:47:29 2024 00:37:41.958 read: IOPS=669, BW=2679KiB/s (2744kB/s)(26.2MiB/10002msec) 00:37:41.958 slat (nsec): min=5517, max=58514, avg=13308.54, stdev=7616.39 00:37:41.958 clat (usec): min=7099, max=42434, avg=23779.33, stdev=2126.10 00:37:41.958 lat (usec): min=7104, max=42453, avg=23792.64, stdev=2126.85 00:37:41.958 clat percentiles (usec): 00:37:41.958 | 1.00th=[14746], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:37:41.958 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:41.958 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24249], 95.00th=[24773], 00:37:41.958 | 99.00th=[32375], 99.50th=[35390], 99.90th=[42206], 99.95th=[42206], 00:37:41.958 | 99.99th=[42206] 00:37:41.958 bw ( KiB/s): min= 2554, max= 2736, per=4.13%, avg=2669.68, stdev=51.21, samples=19 00:37:41.958 iops : min= 638, max= 684, avg=667.37, stdev=12.86, samples=19 00:37:41.958 lat (msec) : 10=0.30%, 20=2.55%, 50=97.15% 00:37:41.958 cpu : usr=98.85%, sys=0.73%, ctx=124, majf=0, minf=49 00:37:41.958 IO depths : 1=4.7%, 
2=10.5%, 4=23.6%, 8=53.4%, 16=7.9%, 32=0.0%, >=64=0.0% 00:37:41.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.958 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.958 issued rwts: total=6700,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:41.958 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:41.958 filename0: (groupid=0, jobs=1): err= 0: pid=911883: Wed Nov 20 15:47:29 2024 00:37:41.958 read: IOPS=670, BW=2684KiB/s (2748kB/s)(26.2MiB/10009msec) 00:37:41.958 slat (nsec): min=5654, max=62649, avg=15931.69, stdev=8550.34 00:37:41.958 clat (usec): min=13807, max=34248, avg=23704.32, stdev=1397.63 00:37:41.958 lat (usec): min=13826, max=34255, avg=23720.25, stdev=1398.14 00:37:41.958 clat percentiles (usec): 00:37:41.958 | 1.00th=[16909], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:37:41.958 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:41.958 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:37:41.958 | 99.00th=[28443], 99.50th=[29492], 99.90th=[34341], 99.95th=[34341], 00:37:41.958 | 99.99th=[34341] 00:37:41.958 bw ( KiB/s): min= 2560, max= 2784, per=4.14%, avg=2679.26, stdev=47.47, samples=19 00:37:41.958 iops : min= 640, max= 696, avg=669.79, stdev=11.87, samples=19 00:37:41.958 lat (msec) : 20=2.93%, 50=97.07% 00:37:41.958 cpu : usr=98.32%, sys=1.25%, ctx=78, majf=0, minf=21 00:37:41.958 IO depths : 1=5.6%, 2=11.6%, 4=24.2%, 8=51.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:37:41.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.958 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.958 issued rwts: total=6716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:41.958 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:41.958 filename0: (groupid=0, jobs=1): err= 0: pid=911884: Wed Nov 20 15:47:29 2024 00:37:41.958 read: IOPS=670, BW=2683KiB/s (2748kB/s)(26.2MiB/10017msec) 00:37:41.958 slat (nsec): min=5660, max=55024, avg=9627.65, stdev=5809.69 00:37:41.958 clat (usec): min=11033, max=26773, avg=23765.91, stdev=1023.21 00:37:41.958 lat (usec): min=11043, max=26793, avg=23775.53, stdev=1022.86 00:37:41.958 clat percentiles (usec): 00:37:41.958 | 1.00th=[19530], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:37:41.958 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:41.958 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24249], 95.00th=[24511], 00:37:41.958 | 99.00th=[25035], 99.50th=[25035], 99.90th=[25297], 99.95th=[25297], 00:37:41.958 | 99.99th=[26870] 00:37:41.958 bw ( KiB/s): min= 2560, max= 2821, per=4.14%, avg=2681.53, stdev=52.53, samples=19 00:37:41.958 iops : min= 640, max= 705, avg=670.37, stdev=13.09, samples=19 00:37:41.958 lat (msec) : 20=1.19%, 50=98.81% 00:37:41.958 cpu : usr=98.87%, sys=0.84%, ctx=29, majf=0, minf=33 00:37:41.958 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:41.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.958 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.958 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:41.958 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:41.958 filename0: (groupid=0, jobs=1): err= 0: pid=911885: Wed Nov 20 15:47:29 2024 00:37:41.958 read: IOPS=672, BW=2691KiB/s (2756kB/s)(26.3MiB/10004msec) 00:37:41.958 slat (nsec): min=5653, max=65660, avg=18040.79, 
stdev=10134.95 00:37:41.958 clat (usec): min=4810, max=42657, avg=23630.19, stdev=2292.50 00:37:41.958 lat (usec): min=4818, max=42674, avg=23648.23, stdev=2292.77 00:37:41.958 clat percentiles (usec): 00:37:41.959 | 1.00th=[15533], 5.00th=[21103], 10.00th=[23200], 20.00th=[23462], 00:37:41.959 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:41.959 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24511], 95.00th=[24773], 00:37:41.959 | 99.00th=[29230], 99.50th=[33162], 99.90th=[42730], 99.95th=[42730], 00:37:41.959 | 99.99th=[42730] 00:37:41.959 bw ( KiB/s): min= 2544, max= 2848, per=4.14%, avg=2677.79, stdev=67.01, samples=19 00:37:41.959 iops : min= 636, max= 712, avg=669.37, stdev=16.80, samples=19 00:37:41.959 lat (msec) : 10=0.48%, 20=4.00%, 50=95.53% 00:37:41.959 cpu : usr=98.79%, sys=0.86%, ctx=62, majf=0, minf=26 00:37:41.959 IO depths : 1=4.9%, 2=10.0%, 4=20.7%, 8=56.1%, 16=8.2%, 32=0.0%, >=64=0.0% 00:37:41.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.959 complete : 0=0.0%, 4=93.1%, 8=1.8%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.959 issued rwts: total=6730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:41.959 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:41.959 filename0: (groupid=0, jobs=1): err= 0: pid=911886: Wed Nov 20 15:47:29 2024 00:37:41.959 read: IOPS=670, BW=2681KiB/s (2745kB/s)(26.2MiB/10003msec) 00:37:41.959 slat (nsec): min=5671, max=87010, avg=20137.61, stdev=12791.53 00:37:41.959 clat (usec): min=6675, max=42244, avg=23688.07, stdev=1594.07 00:37:41.959 lat (usec): min=6692, max=42261, avg=23708.20, stdev=1594.28 00:37:41.959 clat percentiles (usec): 00:37:41.959 | 1.00th=[20055], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:37:41.959 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:41.959 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:37:41.959 | 99.00th=[25035], 99.50th=[25035], 99.90th=[42206], 99.95th=[42206], 00:37:41.959 | 99.99th=[42206] 00:37:41.959 bw ( KiB/s): min= 2554, max= 2688, per=4.12%, avg=2666.84, stdev=48.47, samples=19 00:37:41.959 iops : min= 638, max= 672, avg=666.63, stdev=12.17, samples=19 00:37:41.959 lat (msec) : 10=0.48%, 20=0.54%, 50=98.99% 00:37:41.959 cpu : usr=98.79%, sys=0.78%, ctx=106, majf=0, minf=27 00:37:41.959 IO depths : 1=5.3%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.2%, 32=0.0%, >=64=0.0% 00:37:41.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.959 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.959 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:41.959 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:41.959 filename0: (groupid=0, jobs=1): err= 0: pid=911887: Wed Nov 20 15:47:29 2024 00:37:41.959 read: IOPS=678, BW=2714KiB/s (2779kB/s)(26.5MiB/10009msec) 00:37:41.959 slat (nsec): min=5644, max=68822, avg=20936.88, stdev=12859.45 00:37:41.959 clat (usec): min=5053, max=38524, avg=23392.02, stdev=2471.11 00:37:41.959 lat (usec): min=5061, max=38544, avg=23412.96, stdev=2472.44 00:37:41.959 clat percentiles (usec): 00:37:41.959 | 1.00th=[13042], 5.00th=[18482], 10.00th=[23200], 20.00th=[23462], 00:37:41.959 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:37:41.959 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:37:41.959 | 99.00th=[29754], 99.50th=[34866], 99.90th=[36963], 99.95th=[38536], 00:37:41.959 | 99.99th=[38536] 00:37:41.959 bw ( 
KiB/s): min= 2560, max= 3120, per=4.20%, avg=2717.47, stdev=117.49, samples=19 00:37:41.959 iops : min= 640, max= 780, avg=679.37, stdev=29.37, samples=19 00:37:41.959 lat (msec) : 10=0.57%, 20=5.15%, 50=94.27% 00:37:41.959 cpu : usr=98.92%, sys=0.78%, ctx=40, majf=0, minf=29 00:37:41.959 IO depths : 1=5.7%, 2=11.5%, 4=23.5%, 8=52.5%, 16=6.9%, 32=0.0%, >=64=0.0% 00:37:41.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.959 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.959 issued rwts: total=6790,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:41.959 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:41.959 filename0: (groupid=0, jobs=1): err= 0: pid=911888: Wed Nov 20 15:47:29 2024 00:37:41.959 read: IOPS=681, BW=2724KiB/s (2790kB/s)(26.6MiB/10002msec) 00:37:41.959 slat (nsec): min=5652, max=87782, avg=17242.65, stdev=13512.44 00:37:41.959 clat (usec): min=7774, max=39549, avg=23354.29, stdev=2143.13 00:37:41.959 lat (usec): min=7789, max=39555, avg=23371.53, stdev=2144.57 00:37:41.959 clat percentiles (usec): 00:37:41.959 | 1.00th=[14091], 5.00th=[17433], 10.00th=[23200], 20.00th=[23462], 00:37:41.959 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:41.959 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:37:41.959 | 99.00th=[25035], 99.50th=[29492], 99.90th=[36963], 99.95th=[39584], 00:37:41.959 | 99.99th=[39584] 00:37:41.959 bw ( KiB/s): min= 2560, max= 3168, per=4.22%, avg=2727.00, stdev=136.19, samples=19 00:37:41.959 iops : min= 640, max= 792, avg=681.74, stdev=34.04, samples=19 00:37:41.959 lat (msec) : 10=0.06%, 20=6.12%, 50=93.82% 00:37:41.959 cpu : usr=98.92%, sys=0.78%, ctx=40, majf=0, minf=40 00:37:41.959 IO depths : 1=5.8%, 2=11.6%, 4=23.8%, 8=52.0%, 16=6.7%, 32=0.0%, >=64=0.0% 00:37:41.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.959 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.959 issued rwts: total=6812,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:41.959 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:41.959 filename1: (groupid=0, jobs=1): err= 0: pid=911889: Wed Nov 20 15:47:29 2024 00:37:41.959 read: IOPS=672, BW=2690KiB/s (2754kB/s)(26.3MiB/10018msec) 00:37:41.959 slat (nsec): min=5650, max=85992, avg=14566.90, stdev=13071.82 00:37:41.959 clat (usec): min=9660, max=25271, avg=23681.54, stdev=1353.54 00:37:41.959 lat (usec): min=9668, max=25282, avg=23696.11, stdev=1352.52 00:37:41.959 clat percentiles (usec): 00:37:41.959 | 1.00th=[15795], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:37:41.959 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:41.959 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24249], 95.00th=[24511], 00:37:41.959 | 99.00th=[25035], 99.50th=[25035], 99.90th=[25297], 99.95th=[25297], 00:37:41.959 | 99.99th=[25297] 00:37:41.959 bw ( KiB/s): min= 2560, max= 2944, per=4.16%, avg=2688.00, stdev=73.90, samples=19 00:37:41.959 iops : min= 640, max= 736, avg=672.00, stdev=18.48, samples=19 00:37:41.959 lat (msec) : 10=0.24%, 20=1.19%, 50=98.57% 00:37:41.959 cpu : usr=98.67%, sys=0.88%, ctx=104, majf=0, minf=28 00:37:41.959 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:41.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.959 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.959 issued rwts: total=6736,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:37:41.959 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:41.959 filename1: (groupid=0, jobs=1): err= 0: pid=911890: Wed Nov 20 15:47:29 2024 00:37:41.959 read: IOPS=669, BW=2678KiB/s (2743kB/s)(26.2MiB/10012msec) 00:37:41.959 slat (nsec): min=5673, max=61090, avg=17852.10, stdev=10747.84 00:37:41.959 clat (usec): min=14035, max=34128, avg=23733.93, stdev=1095.53 00:37:41.959 lat (usec): min=14047, max=34162, avg=23751.78, stdev=1095.42 00:37:41.959 clat percentiles (usec): 00:37:41.959 | 1.00th=[18744], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:37:41.959 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:41.959 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24249], 95.00th=[24511], 00:37:41.959 | 99.00th=[25297], 99.50th=[27395], 99.90th=[32900], 99.95th=[32900], 00:37:41.959 | 99.99th=[34341] 00:37:41.959 bw ( KiB/s): min= 2560, max= 2688, per=4.13%, avg=2674.53, stdev=40.36, samples=19 00:37:41.959 iops : min= 640, max= 672, avg=668.63, stdev=10.09, samples=19 00:37:41.959 lat (msec) : 20=1.10%, 50=98.90% 00:37:41.959 cpu : usr=99.07%, sys=0.64%, ctx=46, majf=0, minf=43 00:37:41.959 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.5%, 32=0.0%, >=64=0.0% 00:37:41.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.959 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.959 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:41.959 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:41.959 filename1: (groupid=0, jobs=1): err= 0: pid=911891: Wed Nov 20 15:47:29 2024 00:37:41.959 read: IOPS=674, BW=2699KiB/s (2763kB/s)(26.4MiB/10008msec) 00:37:41.959 slat (nsec): min=5714, max=68961, avg=14888.07, stdev=10878.72 00:37:41.959 clat (usec): min=5014, max=25286, avg=23593.30, stdev=1781.46 00:37:41.959 lat (usec): min=5026, max=25294, avg=23608.19, stdev=1780.71 00:37:41.959 clat percentiles (usec): 00:37:41.959 | 1.00th=[12256], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:37:41.959 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:41.959 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24249], 95.00th=[24511], 00:37:41.959 | 99.00th=[25035], 99.50th=[25035], 99.90th=[25297], 99.95th=[25297], 00:37:41.959 | 99.99th=[25297] 00:37:41.959 bw ( KiB/s): min= 2560, max= 3072, per=4.18%, avg=2701.47, stdev=103.59, samples=19 00:37:41.959 iops : min= 640, max= 768, avg=675.37, stdev=25.90, samples=19 00:37:41.959 lat (msec) : 10=0.71%, 20=0.95%, 50=98.34% 00:37:41.959 cpu : usr=99.04%, sys=0.67%, ctx=38, majf=0, minf=33 00:37:41.959 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:41.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.959 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.959 issued rwts: total=6752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:41.959 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:41.959 filename1: (groupid=0, jobs=1): err= 0: pid=911892: Wed Nov 20 15:47:29 2024 00:37:41.959 read: IOPS=669, BW=2677KiB/s (2741kB/s)(26.2MiB/10003msec) 00:37:41.959 slat (nsec): min=5585, max=63651, avg=17868.46, stdev=11027.35 00:37:41.959 clat (usec): min=5182, max=50277, avg=23728.29, stdev=1759.93 00:37:41.959 lat (usec): min=5192, max=50298, avg=23746.16, stdev=1760.04 00:37:41.959 clat percentiles (usec): 00:37:41.959 | 1.00th=[22414], 5.00th=[23200], 10.00th=[23462], 
20.00th=[23462], 00:37:41.959 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:41.959 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:37:41.959 | 99.00th=[24773], 99.50th=[25297], 99.90th=[50070], 99.95th=[50070], 00:37:41.959 | 99.99th=[50070] 00:37:41.959 bw ( KiB/s): min= 2436, max= 2688, per=4.12%, avg=2667.05, stdev=63.10, samples=19 00:37:41.959 iops : min= 609, max= 672, avg=666.68, stdev=15.76, samples=19 00:37:41.959 lat (msec) : 10=0.34%, 20=0.34%, 50=99.07%, 100=0.24% 00:37:41.959 cpu : usr=99.10%, sys=0.63%, ctx=16, majf=0, minf=31 00:37:41.959 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:41.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.960 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.960 issued rwts: total=6695,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:41.960 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:41.960 filename1: (groupid=0, jobs=1): err= 0: pid=911893: Wed Nov 20 15:47:29 2024 00:37:41.960 read: IOPS=670, BW=2680KiB/s (2745kB/s)(26.2MiB/10005msec) 00:37:41.960 slat (nsec): min=5658, max=71149, avg=16527.62, stdev=12451.08 00:37:41.960 clat (usec): min=9734, max=31207, avg=23740.16, stdev=944.59 00:37:41.960 lat (usec): min=9745, max=31215, avg=23756.68, stdev=944.23 00:37:41.960 clat percentiles (usec): 00:37:41.960 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:37:41.960 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:41.960 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24249], 95.00th=[24511], 00:37:41.960 | 99.00th=[25035], 99.50th=[25035], 99.90th=[30540], 99.95th=[31065], 00:37:41.960 | 99.99th=[31327] 00:37:41.960 bw ( KiB/s): min= 2560, max= 2688, per=4.14%, avg=2681.26, stdev=29.37, samples=19 00:37:41.960 iops : min= 640, max= 672, avg=670.32, stdev= 7.34, samples=19 00:37:41.960 lat (msec) : 10=0.03%, 20=0.69%, 50=99.28% 00:37:41.960 cpu : usr=98.49%, sys=1.03%, ctx=199, majf=0, minf=40 00:37:41.960 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:41.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.960 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.960 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:41.960 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:41.960 filename1: (groupid=0, jobs=1): err= 0: pid=911894: Wed Nov 20 15:47:29 2024 00:37:41.960 read: IOPS=668, BW=2675KiB/s (2739kB/s)(26.2MiB/10043msec) 00:37:41.960 slat (nsec): min=5651, max=86063, avg=19049.40, stdev=14774.04 00:37:41.960 clat (usec): min=5434, max=51180, avg=23689.26, stdev=2786.27 00:37:41.960 lat (usec): min=5460, max=51199, avg=23708.31, stdev=2786.19 00:37:41.960 clat percentiles (usec): 00:37:41.960 | 1.00th=[15270], 5.00th=[19006], 10.00th=[22938], 20.00th=[23462], 00:37:41.960 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:37:41.960 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24511], 95.00th=[27657], 00:37:41.960 | 99.00th=[34341], 99.50th=[36439], 99.90th=[45876], 99.95th=[45876], 00:37:41.960 | 99.99th=[51119] 00:37:41.960 bw ( KiB/s): min= 2432, max= 2768, per=4.13%, avg=2672.79, stdev=73.42, samples=19 00:37:41.960 iops : min= 608, max= 692, avg=668.16, stdev=18.35, samples=19 00:37:41.960 lat (msec) : 10=0.15%, 20=6.15%, 50=93.67%, 100=0.03% 00:37:41.960 cpu : usr=98.63%, 
sys=0.93%, ctx=63, majf=0, minf=36 00:37:41.960 IO depths : 1=4.6%, 2=9.5%, 4=20.1%, 8=57.3%, 16=8.6%, 32=0.0%, >=64=0.0% 00:37:41.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.960 complete : 0=0.0%, 4=92.9%, 8=2.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.960 issued rwts: total=6716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:41.960 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:41.960 filename1: (groupid=0, jobs=1): err= 0: pid=911895: Wed Nov 20 15:47:29 2024 00:37:41.960 read: IOPS=670, BW=2681KiB/s (2745kB/s)(26.2MiB/10004msec) 00:37:41.960 slat (nsec): min=5658, max=86094, avg=20981.14, stdev=13816.92 00:37:41.960 clat (usec): min=7087, max=42481, avg=23690.87, stdev=1585.16 00:37:41.960 lat (usec): min=7093, max=42499, avg=23711.85, stdev=1585.07 00:37:41.960 clat percentiles (usec): 00:37:41.960 | 1.00th=[22414], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:37:41.960 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:41.960 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:37:41.960 | 99.00th=[25035], 99.50th=[25035], 99.90th=[42206], 99.95th=[42206], 00:37:41.960 | 99.99th=[42730] 00:37:41.960 bw ( KiB/s): min= 2554, max= 2688, per=4.12%, avg=2666.84, stdev=48.47, samples=19 00:37:41.960 iops : min= 638, max= 672, avg=666.63, stdev=12.17, samples=19 00:37:41.960 lat (msec) : 10=0.48%, 20=0.51%, 50=99.02% 00:37:41.960 cpu : usr=98.57%, sys=0.99%, ctx=85, majf=0, minf=24 00:37:41.960 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:41.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.960 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.960 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:41.960 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:41.960 filename1: (groupid=0, jobs=1): err= 0: pid=911896: Wed Nov 20 15:47:29 2024 00:37:41.960 read: IOPS=666, BW=2665KiB/s (2729kB/s)(26.2MiB/10047msec) 00:37:41.960 slat (nsec): min=5656, max=68191, avg=18416.75, stdev=11221.50 00:37:41.960 clat (usec): min=15609, max=48463, avg=23783.73, stdev=1005.14 00:37:41.960 lat (usec): min=15623, max=48470, avg=23802.15, stdev=1005.01 00:37:41.960 clat percentiles (usec): 00:37:41.960 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:37:41.960 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:41.960 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:37:41.960 | 99.00th=[25035], 99.50th=[25035], 99.90th=[48497], 99.95th=[48497], 00:37:41.960 | 99.99th=[48497] 00:37:41.960 bw ( KiB/s): min= 2560, max= 2688, per=4.13%, avg=2674.21, stdev=40.27, samples=19 00:37:41.960 iops : min= 640, max= 672, avg=668.53, stdev=10.06, samples=19 00:37:41.960 lat (msec) : 20=0.16%, 50=99.84% 00:37:41.960 cpu : usr=98.90%, sys=0.84%, ctx=15, majf=0, minf=29 00:37:41.960 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:41.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.960 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.960 issued rwts: total=6695,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:41.960 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:41.960 filename2: (groupid=0, jobs=1): err= 0: pid=911897: Wed Nov 20 15:47:29 2024 00:37:41.960 read: IOPS=676, BW=2707KiB/s 
(2772kB/s)(26.5MiB/10008msec) 00:37:41.960 slat (nsec): min=5669, max=63360, avg=14719.63, stdev=11016.39 00:37:41.960 clat (usec): min=5306, max=28676, avg=23523.80, stdev=2088.84 00:37:41.960 lat (usec): min=5315, max=28685, avg=23538.52, stdev=2088.84 00:37:41.960 clat percentiles (usec): 00:37:41.960 | 1.00th=[ 8094], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:37:41.960 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:41.960 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24249], 95.00th=[24511], 00:37:41.960 | 99.00th=[24773], 99.50th=[25035], 99.90th=[26084], 99.95th=[28705], 00:37:41.960 | 99.99th=[28705] 00:37:41.960 bw ( KiB/s): min= 2560, max= 3248, per=4.19%, avg=2710.74, stdev=133.37, samples=19 00:37:41.960 iops : min= 640, max= 812, avg=677.68, stdev=33.34, samples=19 00:37:41.960 lat (msec) : 10=1.09%, 20=1.12%, 50=97.79% 00:37:41.960 cpu : usr=98.36%, sys=1.10%, ctx=122, majf=0, minf=32 00:37:41.960 IO depths : 1=6.1%, 2=12.3%, 4=24.6%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:41.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.960 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.960 issued rwts: total=6774,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:41.960 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:41.960 filename2: (groupid=0, jobs=1): err= 0: pid=911898: Wed Nov 20 15:47:29 2024 00:37:41.960 read: IOPS=675, BW=2701KiB/s (2766kB/s)(26.4MiB/10014msec) 00:37:41.960 slat (nsec): min=5660, max=87043, avg=21081.69, stdev=14322.14 00:37:41.960 clat (usec): min=10442, max=50927, avg=23513.88, stdev=3015.87 00:37:41.960 lat (usec): min=10451, max=50943, avg=23534.96, stdev=3017.69 00:37:41.960 clat percentiles (usec): 00:37:41.960 | 1.00th=[15008], 5.00th=[16909], 10.00th=[20579], 20.00th=[23462], 00:37:41.960 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:37:41.960 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24511], 95.00th=[27395], 00:37:41.960 | 99.00th=[35914], 99.50th=[36963], 99.90th=[37487], 99.95th=[37487], 00:37:41.960 | 99.99th=[51119] 00:37:41.960 bw ( KiB/s): min= 2480, max= 2906, per=4.16%, avg=2690.74, stdev=90.76, samples=19 00:37:41.960 iops : min= 620, max= 726, avg=672.63, stdev=22.67, samples=19 00:37:41.960 lat (msec) : 20=9.05%, 50=90.92%, 100=0.03% 00:37:41.960 cpu : usr=98.10%, sys=1.19%, ctx=172, majf=0, minf=32 00:37:41.960 IO depths : 1=4.2%, 2=9.6%, 4=22.2%, 8=55.6%, 16=8.4%, 32=0.0%, >=64=0.0% 00:37:41.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.960 complete : 0=0.0%, 4=93.4%, 8=1.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.960 issued rwts: total=6762,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:41.960 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:41.960 filename2: (groupid=0, jobs=1): err= 0: pid=911899: Wed Nov 20 15:47:29 2024 00:37:41.960 read: IOPS=670, BW=2681KiB/s (2745kB/s)(26.2MiB/10003msec) 00:37:41.960 slat (nsec): min=5661, max=87884, avg=21515.07, stdev=14250.88 00:37:41.960 clat (usec): min=6740, max=41865, avg=23669.82, stdev=1571.59 00:37:41.960 lat (usec): min=6745, max=41885, avg=23691.33, stdev=1571.95 00:37:41.960 clat percentiles (usec): 00:37:41.960 | 1.00th=[22414], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:37:41.960 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:41.960 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:37:41.960 | 99.00th=[25035], 
99.50th=[25035], 99.90th=[41681], 99.95th=[41681], 00:37:41.960 | 99.99th=[41681] 00:37:41.960 bw ( KiB/s): min= 2554, max= 2688, per=4.12%, avg=2666.84, stdev=48.47, samples=19 00:37:41.960 iops : min= 638, max= 672, avg=666.63, stdev=12.17, samples=19 00:37:41.960 lat (msec) : 10=0.48%, 20=0.48%, 50=99.05% 00:37:41.960 cpu : usr=98.71%, sys=1.01%, ctx=11, majf=0, minf=43 00:37:41.960 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:41.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.960 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.960 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:41.960 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:41.960 filename2: (groupid=0, jobs=1): err= 0: pid=911900: Wed Nov 20 15:47:29 2024 00:37:41.960 read: IOPS=668, BW=2676KiB/s (2740kB/s)(26.1MiB/10007msec) 00:37:41.960 slat (nsec): min=5660, max=58274, avg=15886.29, stdev=9474.57 00:37:41.960 clat (usec): min=6608, max=40158, avg=23806.93, stdev=1516.24 00:37:41.960 lat (usec): min=6627, max=40176, avg=23822.82, stdev=1516.19 00:37:41.960 clat percentiles (usec): 00:37:41.960 | 1.00th=[18482], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:37:41.960 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:41.960 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24249], 95.00th=[24511], 00:37:41.961 | 99.00th=[27395], 99.50th=[30540], 99.90th=[40109], 99.95th=[40109], 00:37:41.961 | 99.99th=[40109] 00:37:41.961 bw ( KiB/s): min= 2560, max= 2736, per=4.12%, avg=2666.05, stdev=51.90, samples=19 00:37:41.961 iops : min= 640, max= 684, avg=666.47, stdev=12.98, samples=19 00:37:41.961 lat (msec) : 10=0.21%, 20=0.97%, 50=98.82% 00:37:41.961 cpu : usr=98.46%, sys=1.05%, ctx=114, majf=0, minf=41 00:37:41.961 IO depths : 1=3.0%, 2=6.0%, 4=12.4%, 8=66.0%, 16=12.5%, 32=0.0%, >=64=0.0% 00:37:41.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.961 complete : 0=0.0%, 4=91.6%, 8=5.5%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.961 issued rwts: total=6694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:41.961 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:41.961 filename2: (groupid=0, jobs=1): err= 0: pid=911901: Wed Nov 20 15:47:29 2024 00:37:41.961 read: IOPS=671, BW=2686KiB/s (2750kB/s)(26.2MiB/10008msec) 00:37:41.961 slat (nsec): min=5650, max=69130, avg=15224.15, stdev=11452.78 00:37:41.961 clat (usec): min=11672, max=31419, avg=23704.63, stdev=1091.68 00:37:41.961 lat (usec): min=11680, max=31425, avg=23719.85, stdev=1091.38 00:37:41.961 clat percentiles (usec): 00:37:41.961 | 1.00th=[16581], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:37:41.961 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:41.961 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:37:41.961 | 99.00th=[24773], 99.50th=[25035], 99.90th=[25297], 99.95th=[31327], 00:37:41.961 | 99.99th=[31327] 00:37:41.961 bw ( KiB/s): min= 2560, max= 2821, per=4.16%, avg=2688.26, stdev=43.51, samples=19 00:37:41.961 iops : min= 640, max= 705, avg=672.05, stdev=10.83, samples=19 00:37:41.961 lat (msec) : 20=1.04%, 50=98.96% 00:37:41.961 cpu : usr=97.60%, sys=1.50%, ctx=596, majf=0, minf=37 00:37:41.961 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:41.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.961 complete : 0=0.0%, 4=94.1%, 
8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.961 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:41.961 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:41.961 filename2: (groupid=0, jobs=1): err= 0: pid=911902: Wed Nov 20 15:47:29 2024 00:37:41.961 read: IOPS=735, BW=2943KiB/s (3014kB/s)(28.8MiB/10002msec) 00:37:41.961 slat (nsec): min=5641, max=65379, avg=10856.20, stdev=7239.51 00:37:41.961 clat (usec): min=8469, max=54877, avg=21683.82, stdev=3970.11 00:37:41.961 lat (usec): min=8475, max=54897, avg=21694.68, stdev=3971.44 00:37:41.961 clat percentiles (usec): 00:37:41.961 | 1.00th=[11469], 5.00th=[14746], 10.00th=[15533], 20.00th=[17171], 00:37:41.961 | 30.00th=[20579], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:37:41.961 | 70.00th=[23725], 80.00th=[23725], 90.00th=[24249], 95.00th=[24511], 00:37:41.961 | 99.00th=[29230], 99.50th=[35914], 99.90th=[41681], 99.95th=[41681], 00:37:41.961 | 99.99th=[54789] 00:37:41.961 bw ( KiB/s): min= 2613, max= 3456, per=4.55%, avg=2943.21, stdev=260.36, samples=19 00:37:41.961 iops : min= 653, max= 864, avg=735.74, stdev=65.11, samples=19 00:37:41.961 lat (msec) : 10=0.37%, 20=27.91%, 50=71.70%, 100=0.03% 00:37:41.961 cpu : usr=99.13%, sys=0.59%, ctx=14, majf=0, minf=33 00:37:41.961 IO depths : 1=0.6%, 2=2.2%, 4=8.2%, 8=74.3%, 16=14.8%, 32=0.0%, >=64=0.0% 00:37:41.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.961 complete : 0=0.0%, 4=90.5%, 8=6.5%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.961 issued rwts: total=7360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:41.961 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:41.961 filename2: (groupid=0, jobs=1): err= 0: pid=911903: Wed Nov 20 15:47:29 2024 00:37:41.961 read: IOPS=691, BW=2765KiB/s (2832kB/s)(27.0MiB/10013msec) 00:37:41.961 slat (nsec): min=5642, max=83483, avg=17338.68, stdev=12412.26 00:37:41.961 clat (usec): min=11442, max=38498, avg=23006.80, stdev=3428.96 00:37:41.961 lat (usec): min=11449, max=38525, avg=23024.14, stdev=3431.82 00:37:41.961 clat percentiles (usec): 00:37:41.961 | 1.00th=[14615], 5.00th=[16057], 10.00th=[17171], 20.00th=[22676], 00:37:41.961 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:37:41.961 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[27919], 00:37:41.961 | 99.00th=[34866], 99.50th=[36439], 99.90th=[37487], 99.95th=[38011], 00:37:41.961 | 99.99th=[38536] 00:37:41.961 bw ( KiB/s): min= 2560, max= 3161, per=4.28%, avg=2766.79, stdev=151.38, samples=19 00:37:41.961 iops : min= 640, max= 790, avg=691.68, stdev=37.81, samples=19 00:37:41.961 lat (msec) : 20=16.12%, 50=83.88% 00:37:41.961 cpu : usr=98.22%, sys=1.22%, ctx=168, majf=0, minf=26 00:37:41.961 IO depths : 1=3.1%, 2=7.2%, 4=18.5%, 8=61.3%, 16=9.8%, 32=0.0%, >=64=0.0% 00:37:41.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.961 complete : 0=0.0%, 4=92.5%, 8=2.2%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.961 issued rwts: total=6922,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:41.961 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:41.961 filename2: (groupid=0, jobs=1): err= 0: pid=911904: Wed Nov 20 15:47:29 2024 00:37:41.961 read: IOPS=670, BW=2681KiB/s (2745kB/s)(26.2MiB/10002msec) 00:37:41.961 slat (nsec): min=5657, max=58387, avg=13113.06, stdev=7887.98 00:37:41.961 clat (usec): min=11955, max=29437, avg=23761.46, stdev=889.65 00:37:41.961 lat (usec): min=11963, max=29456, avg=23774.58, stdev=889.56 
00:37:41.961 clat percentiles (usec): 00:37:41.961 | 1.00th=[22414], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:37:41.961 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:41.961 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24249], 95.00th=[24511], 00:37:41.961 | 99.00th=[24773], 99.50th=[25297], 99.90th=[26084], 99.95th=[26084], 00:37:41.961 | 99.99th=[29492] 00:37:41.961 bw ( KiB/s): min= 2560, max= 2816, per=4.14%, avg=2681.26, stdev=51.80, samples=19 00:37:41.961 iops : min= 640, max= 704, avg=670.32, stdev=12.95, samples=19 00:37:41.961 lat (msec) : 20=0.72%, 50=99.28% 00:37:41.961 cpu : usr=98.95%, sys=0.77%, ctx=22, majf=0, minf=41 00:37:41.961 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:41.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.961 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.961 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:41.961 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:41.961 00:37:41.961 Run status group 0 (all jobs): 00:37:41.961 READ: bw=63.2MiB/s (66.2MB/s), 2665KiB/s-2943KiB/s (2729kB/s-3014kB/s), io=635MiB (665MB), run=10002-10047msec 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.961 15:47:29 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:41.961 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:41.962 bdev_null0 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:41.962 [2024-11-20 15:47:29.857935] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:41.962 bdev_null1 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:41.962 15:47:29 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:41.962 { 00:37:41.962 "params": { 00:37:41.962 "name": "Nvme$subsystem", 00:37:41.962 "trtype": "$TEST_TRANSPORT", 00:37:41.962 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:41.962 "adrfam": "ipv4", 00:37:41.962 "trsvcid": "$NVMF_PORT", 00:37:41.962 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:41.962 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:41.962 "hdgst": ${hdgst:-false}, 00:37:41.962 "ddgst": ${ddgst:-false} 00:37:41.962 }, 00:37:41.962 "method": "bdev_nvme_attach_controller" 00:37:41.962 } 00:37:41.962 EOF 00:37:41.962 )") 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:41.962 { 00:37:41.962 "params": { 00:37:41.962 "name": "Nvme$subsystem", 00:37:41.962 "trtype": "$TEST_TRANSPORT", 00:37:41.962 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:41.962 "adrfam": "ipv4", 00:37:41.962 "trsvcid": "$NVMF_PORT", 00:37:41.962 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:41.962 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:41.962 "hdgst": ${hdgst:-false}, 00:37:41.962 "ddgst": ${ddgst:-false} 00:37:41.962 }, 00:37:41.962 "method": "bdev_nvme_attach_controller" 00:37:41.962 } 00:37:41.962 EOF 
00:37:41.962 )") 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:41.962 "params": { 00:37:41.962 "name": "Nvme0", 00:37:41.962 "trtype": "tcp", 00:37:41.962 "traddr": "10.0.0.2", 00:37:41.962 "adrfam": "ipv4", 00:37:41.962 "trsvcid": "4420", 00:37:41.962 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:41.962 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:41.962 "hdgst": false, 00:37:41.962 "ddgst": false 00:37:41.962 }, 00:37:41.962 "method": "bdev_nvme_attach_controller" 00:37:41.962 },{ 00:37:41.962 "params": { 00:37:41.962 "name": "Nvme1", 00:37:41.962 "trtype": "tcp", 00:37:41.962 "traddr": "10.0.0.2", 00:37:41.962 "adrfam": "ipv4", 00:37:41.962 "trsvcid": "4420", 00:37:41.962 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:41.962 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:41.962 "hdgst": false, 00:37:41.962 "ddgst": false 00:37:41.962 }, 00:37:41.962 "method": "bdev_nvme_attach_controller" 00:37:41.962 }' 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:41.962 15:47:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:41.962 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:41.962 ... 00:37:41.962 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:41.962 ... 
00:37:41.962 fio-3.35 00:37:41.962 Starting 4 threads 00:37:47.245 00:37:47.245 filename0: (groupid=0, jobs=1): err= 0: pid=914260: Wed Nov 20 15:47:36 2024 00:37:47.245 read: IOPS=2956, BW=23.1MiB/s (24.2MB/s)(116MiB/5003msec) 00:37:47.245 slat (nsec): min=5505, max=56936, avg=8041.14, stdev=3155.68 00:37:47.245 clat (usec): min=952, max=44059, avg=2685.22, stdev=985.14 00:37:47.245 lat (usec): min=961, max=44097, avg=2693.27, stdev=985.33 00:37:47.245 clat percentiles (usec): 00:37:47.245 | 1.00th=[ 1991], 5.00th=[ 2245], 10.00th=[ 2474], 20.00th=[ 2638], 00:37:47.245 | 30.00th=[ 2671], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2704], 00:37:47.245 | 70.00th=[ 2704], 80.00th=[ 2737], 90.00th=[ 2769], 95.00th=[ 2900], 00:37:47.245 | 99.00th=[ 3523], 99.50th=[ 3654], 99.90th=[ 4424], 99.95th=[43779], 00:37:47.245 | 99.99th=[44303] 00:37:47.245 bw ( KiB/s): min=21712, max=24192, per=25.13%, avg=23635.56, stdev=750.80, samples=9 00:37:47.246 iops : min= 2714, max= 3024, avg=2954.44, stdev=93.85, samples=9 00:37:47.246 lat (usec) : 1000=0.01% 00:37:47.246 lat (msec) : 2=1.01%, 4=98.78%, 10=0.14%, 50=0.05% 00:37:47.246 cpu : usr=96.12%, sys=3.60%, ctx=7, majf=0, minf=31 00:37:47.246 IO depths : 1=0.1%, 2=0.1%, 4=68.9%, 8=31.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:47.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.246 complete : 0=0.0%, 4=95.1%, 8=4.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.246 issued rwts: total=14793,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:47.246 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:47.246 filename0: (groupid=0, jobs=1): err= 0: pid=914262: Wed Nov 20 15:47:36 2024 00:37:47.246 read: IOPS=2933, BW=22.9MiB/s (24.0MB/s)(115MiB/5004msec) 00:37:47.246 slat (nsec): min=5504, max=60578, avg=8069.43, stdev=3083.28 00:37:47.246 clat (usec): min=855, max=7709, avg=2705.37, stdev=209.32 00:37:47.246 lat (usec): min=867, max=7719, avg=2713.44, stdev=209.22 00:37:47.246 clat percentiles (usec): 00:37:47.246 | 1.00th=[ 2073], 5.00th=[ 2507], 10.00th=[ 2638], 20.00th=[ 2671], 00:37:47.246 | 30.00th=[ 2671], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2704], 00:37:47.246 | 70.00th=[ 2704], 80.00th=[ 2737], 90.00th=[ 2835], 95.00th=[ 2966], 00:37:47.246 | 99.00th=[ 3425], 99.50th=[ 3687], 99.90th=[ 4424], 99.95th=[ 5407], 00:37:47.246 | 99.99th=[ 7701] 00:37:47.246 bw ( KiB/s): min=23072, max=23856, per=24.96%, avg=23473.60, stdev=214.32, samples=10 00:37:47.246 iops : min= 2884, max= 2982, avg=2934.20, stdev=26.79, samples=10 00:37:47.246 lat (usec) : 1000=0.02% 00:37:47.246 lat (msec) : 2=0.63%, 4=99.07%, 10=0.28% 00:37:47.246 cpu : usr=96.50%, sys=3.22%, ctx=7, majf=0, minf=65 00:37:47.246 IO depths : 1=0.1%, 2=0.2%, 4=71.5%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:47.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.246 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.246 issued rwts: total=14679,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:47.246 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:47.246 filename1: (groupid=0, jobs=1): err= 0: pid=914263: Wed Nov 20 15:47:36 2024 00:37:47.246 read: IOPS=2941, BW=23.0MiB/s (24.1MB/s)(115MiB/5004msec) 00:37:47.246 slat (nsec): min=8019, max=91174, avg=9873.04, stdev=3586.30 00:37:47.246 clat (usec): min=1320, max=7142, avg=2695.06, stdev=222.59 00:37:47.246 lat (usec): min=1334, max=7151, avg=2704.93, stdev=222.70 00:37:47.246 clat percentiles (usec): 00:37:47.246 | 1.00th=[ 2057], 
5.00th=[ 2409], 10.00th=[ 2573], 20.00th=[ 2638], 00:37:47.246 | 30.00th=[ 2671], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2704], 00:37:47.246 | 70.00th=[ 2704], 80.00th=[ 2737], 90.00th=[ 2802], 95.00th=[ 2966], 00:37:47.246 | 99.00th=[ 3458], 99.50th=[ 3720], 99.90th=[ 4359], 99.95th=[ 6587], 00:37:47.246 | 99.99th=[ 7111] 00:37:47.246 bw ( KiB/s): min=23376, max=23664, per=25.03%, avg=23542.40, stdev=94.27, samples=10 00:37:47.246 iops : min= 2922, max= 2958, avg=2942.80, stdev=11.78, samples=10 00:37:47.246 lat (msec) : 2=0.67%, 4=99.09%, 10=0.24% 00:37:47.246 cpu : usr=96.00%, sys=3.68%, ctx=36, majf=0, minf=43 00:37:47.246 IO depths : 1=0.1%, 2=0.2%, 4=68.2%, 8=31.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:47.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.246 complete : 0=0.0%, 4=95.5%, 8=4.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.246 issued rwts: total=14719,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:47.246 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:47.246 filename1: (groupid=0, jobs=1): err= 0: pid=914264: Wed Nov 20 15:47:36 2024 00:37:47.246 read: IOPS=2926, BW=22.9MiB/s (24.0MB/s)(114MiB/5003msec) 00:37:47.246 slat (nsec): min=8016, max=88053, avg=9255.81, stdev=3162.47 00:37:47.246 clat (usec): min=1356, max=7940, avg=2707.48, stdev=201.46 00:37:47.246 lat (usec): min=1364, max=7948, avg=2716.74, stdev=201.75 00:37:47.246 clat percentiles (usec): 00:37:47.246 | 1.00th=[ 2212], 5.00th=[ 2540], 10.00th=[ 2638], 20.00th=[ 2671], 00:37:47.246 | 30.00th=[ 2671], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2704], 00:37:47.246 | 70.00th=[ 2704], 80.00th=[ 2737], 90.00th=[ 2802], 95.00th=[ 2966], 00:37:47.246 | 99.00th=[ 3392], 99.50th=[ 3687], 99.90th=[ 4752], 99.95th=[ 5735], 00:37:47.246 | 99.99th=[ 7963] 00:37:47.246 bw ( KiB/s): min=23184, max=23632, per=24.90%, avg=23423.80, stdev=132.13, samples=10 00:37:47.246 iops : min= 2898, max= 2954, avg=2927.90, stdev=16.51, samples=10 00:37:47.246 lat (msec) : 2=0.48%, 4=99.17%, 10=0.35% 00:37:47.246 cpu : usr=97.14%, sys=2.58%, ctx=8, majf=0, minf=38 00:37:47.246 IO depths : 1=0.1%, 2=0.1%, 4=73.6%, 8=26.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:47.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.246 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.246 issued rwts: total=14642,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:47.246 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:47.246 00:37:47.246 Run status group 0 (all jobs): 00:37:47.246 READ: bw=91.9MiB/s (96.3MB/s), 22.9MiB/s-23.1MiB/s (24.0MB/s-24.2MB/s), io=460MiB (482MB), run=5003-5004msec 00:37:47.507 15:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:37:47.507 15:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:47.507 15:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:47.507 15:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:47.507 15:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:47.507 15:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:47.507 15:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.507 15:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:47.507 15:47:36 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.507 15:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:47.507 15:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.507 15:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:47.507 15:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.507 15:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:47.507 15:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:47.507 15:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:47.507 15:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:47.507 15:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.507 15:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:47.507 15:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.507 15:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:47.507 15:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.507 15:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:47.507 15:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.507 00:37:47.507 real 0m24.524s 00:37:47.507 user 5m14.450s 00:37:47.507 sys 0m4.792s 00:37:47.507 15:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:47.507 15:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:47.507 ************************************ 00:37:47.507 END TEST fio_dif_rand_params 00:37:47.507 ************************************ 00:37:47.507 15:47:36 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:37:47.507 15:47:36 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:47.507 15:47:36 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:47.507 15:47:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:47.507 ************************************ 00:37:47.507 START TEST fio_dif_digest 00:37:47.507 ************************************ 00:37:47.507 15:47:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:37:47.507 15:47:36 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:37:47.507 15:47:36 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:37:47.507 15:47:36 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:37:47.507 15:47:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:37:47.507 15:47:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:37:47.507 15:47:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:37:47.507 15:47:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:37:47.507 15:47:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:37:47.507 15:47:36 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:37:47.507 15:47:36 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:37:47.507 15:47:36 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:37:47.507 15:47:36 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:37:47.507 15:47:36 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:37:47.507 15:47:36 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:37:47.507 15:47:36 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:37:47.507 15:47:36 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:47.507 15:47:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.507 15:47:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:47.769 bdev_null0 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:47.769 [2024-11-20 15:47:36.505276] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:47.769 { 00:37:47.769 "params": { 00:37:47.769 "name": "Nvme$subsystem", 00:37:47.769 "trtype": "$TEST_TRANSPORT", 00:37:47.769 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:47.769 "adrfam": "ipv4", 00:37:47.769 "trsvcid": "$NVMF_PORT", 00:37:47.769 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:47.769 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:47.769 "hdgst": ${hdgst:-false}, 00:37:47.769 "ddgst": ${ddgst:-false} 00:37:47.769 }, 00:37:47.769 "method": "bdev_nvme_attach_controller" 00:37:47.769 } 00:37:47.769 EOF 00:37:47.769 )") 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:47.769 "params": { 00:37:47.769 "name": "Nvme0", 00:37:47.769 "trtype": "tcp", 00:37:47.769 "traddr": "10.0.0.2", 00:37:47.769 "adrfam": "ipv4", 00:37:47.769 "trsvcid": "4420", 00:37:47.769 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:47.769 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:47.769 "hdgst": true, 00:37:47.769 "ddgst": true 00:37:47.769 }, 00:37:47.769 "method": "bdev_nvme_attach_controller" 00:37:47.769 }' 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:47.769 15:47:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:48.030 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:48.030 ... 
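Unlike the earlier random-parameter runs, this job attaches the controller with "hdgst": true and "ddgst": true, so every NVMe/TCP PDU carries header and data digests (CRC32C) that both ends verify; the test then confirms that reads still complete cleanly with checksumming in the path. For reference, the same attachment can be made by hand with the rpc.py script; the digest flag spelling below is an assumption and worth confirming against "scripts/rpc.py bdev_nvme_attach_controller -h".

    # One-off attach with digests enabled (sketch; address, port, and NQNs
    # taken from the log above, flag names unverified).
    scripts/rpc.py bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
        --hdgst --ddgst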
00:37:48.030 fio-3.35 00:37:48.030 Starting 3 threads 00:38:00.261 00:38:00.261 filename0: (groupid=0, jobs=1): err= 0: pid=915604: Wed Nov 20 15:47:47 2024 00:38:00.261 read: IOPS=309, BW=38.7MiB/s (40.5MB/s)(389MiB/10047msec) 00:38:00.261 slat (nsec): min=5819, max=76992, avg=8241.65, stdev=2180.56 00:38:00.261 clat (usec): min=7165, max=48448, avg=9673.23, stdev=1211.50 00:38:00.261 lat (usec): min=7177, max=48455, avg=9681.47, stdev=1211.45 00:38:00.261 clat percentiles (usec): 00:38:00.261 | 1.00th=[ 8029], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 8979], 00:38:00.261 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9896], 00:38:00.261 | 70.00th=[10028], 80.00th=[10159], 90.00th=[10552], 95.00th=[10814], 00:38:00.261 | 99.00th=[11469], 99.50th=[11731], 99.90th=[12256], 99.95th=[47449], 00:38:00.261 | 99.99th=[48497] 00:38:00.261 bw ( KiB/s): min=38912, max=40192, per=34.39%, avg=39756.80, stdev=311.88, samples=20 00:38:00.261 iops : min= 304, max= 314, avg=310.60, stdev= 2.44, samples=20 00:38:00.261 lat (msec) : 10=69.88%, 20=30.05%, 50=0.06% 00:38:00.261 cpu : usr=93.74%, sys=5.64%, ctx=621, majf=0, minf=184 00:38:00.261 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:00.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:00.261 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:00.261 issued rwts: total=3108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:00.261 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:00.261 filename0: (groupid=0, jobs=1): err= 0: pid=915605: Wed Nov 20 15:47:47 2024 00:38:00.261 read: IOPS=300, BW=37.5MiB/s (39.4MB/s)(377MiB/10046msec) 00:38:00.261 slat (nsec): min=5948, max=35916, avg=8023.14, stdev=1720.91 00:38:00.261 clat (usec): min=7688, max=48749, avg=9965.58, stdev=1215.76 00:38:00.261 lat (usec): min=7697, max=48756, avg=9973.60, stdev=1215.67 00:38:00.261 clat percentiles (usec): 00:38:00.261 | 1.00th=[ 8291], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9372], 00:38:00.261 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10159], 00:38:00.261 | 70.00th=[10290], 80.00th=[10552], 90.00th=[10945], 95.00th=[11207], 00:38:00.261 | 99.00th=[11731], 99.50th=[12125], 99.90th=[12387], 99.95th=[46400], 00:38:00.261 | 99.99th=[48497] 00:38:00.261 bw ( KiB/s): min=37632, max=39168, per=33.37%, avg=38579.20, stdev=424.32, samples=20 00:38:00.261 iops : min= 294, max= 306, avg=301.40, stdev= 3.32, samples=20 00:38:00.261 lat (msec) : 10=54.16%, 20=45.77%, 50=0.07% 00:38:00.261 cpu : usr=94.73%, sys=5.03%, ctx=18, majf=0, minf=116 00:38:00.261 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:00.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:00.262 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:00.262 issued rwts: total=3017,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:00.262 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:00.262 filename0: (groupid=0, jobs=1): err= 0: pid=915606: Wed Nov 20 15:47:47 2024 00:38:00.262 read: IOPS=293, BW=36.7MiB/s (38.5MB/s)(369MiB/10045msec) 00:38:00.262 slat (nsec): min=5887, max=31479, avg=7687.20, stdev=1624.93 00:38:00.262 clat (usec): min=7733, max=49666, avg=10194.80, stdev=1267.44 00:38:00.262 lat (usec): min=7743, max=49672, avg=10202.49, stdev=1267.41 00:38:00.262 clat percentiles (usec): 00:38:00.262 | 1.00th=[ 8455], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9503], 00:38:00.262 | 
30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10290], 00:38:00.262 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11207], 95.00th=[11600], 00:38:00.262 | 99.00th=[12125], 99.50th=[12256], 99.90th=[13829], 99.95th=[47449], 00:38:00.262 | 99.99th=[49546] 00:38:00.262 bw ( KiB/s): min=37120, max=38400, per=32.63%, avg=37721.60, stdev=313.81, samples=20 00:38:00.262 iops : min= 290, max= 300, avg=294.70, stdev= 2.45, samples=20 00:38:00.262 lat (msec) : 10=43.47%, 20=56.46%, 50=0.07% 00:38:00.262 cpu : usr=94.69%, sys=5.08%, ctx=15, majf=0, minf=79 00:38:00.262 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:00.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:00.262 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:00.262 issued rwts: total=2949,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:00.262 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:00.262 00:38:00.262 Run status group 0 (all jobs): 00:38:00.262 READ: bw=113MiB/s (118MB/s), 36.7MiB/s-38.7MiB/s (38.5MB/s-40.5MB/s), io=1134MiB (1189MB), run=10045-10047msec 00:38:00.262 15:47:47 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:38:00.262 15:47:47 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:38:00.262 15:47:47 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:38:00.262 15:47:47 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:00.262 15:47:47 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:38:00.262 15:47:47 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:00.262 15:47:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.262 15:47:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:00.262 15:47:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.262 15:47:47 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:00.262 15:47:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.262 15:47:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:00.262 15:47:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.262 00:38:00.262 real 0m11.311s 00:38:00.262 user 0m42.185s 00:38:00.262 sys 0m1.949s 00:38:00.262 15:47:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:00.262 15:47:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:00.262 ************************************ 00:38:00.262 END TEST fio_dif_digest 00:38:00.262 ************************************ 00:38:00.262 15:47:47 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:38:00.262 15:47:47 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:38:00.262 15:47:47 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:00.262 15:47:47 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:38:00.262 15:47:47 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:00.262 15:47:47 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:38:00.262 15:47:47 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:00.262 15:47:47 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:00.262 rmmod nvme_tcp 00:38:00.262 rmmod nvme_fabrics 00:38:00.262 rmmod nvme_keyring 00:38:00.262 15:47:47 nvmf_dif -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:38:00.262 15:47:47 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:38:00.262 15:47:47 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:38:00.262 15:47:47 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 905395 ']' 00:38:00.262 15:47:47 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 905395 00:38:00.262 15:47:47 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 905395 ']' 00:38:00.262 15:47:47 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 905395 00:38:00.262 15:47:47 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:38:00.262 15:47:47 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:00.262 15:47:47 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 905395 00:38:00.262 15:47:47 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:00.262 15:47:47 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:00.262 15:47:47 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 905395' 00:38:00.262 killing process with pid 905395 00:38:00.262 15:47:47 nvmf_dif -- common/autotest_common.sh@973 -- # kill 905395 00:38:00.262 15:47:47 nvmf_dif -- common/autotest_common.sh@978 -- # wait 905395 00:38:00.262 15:47:48 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:38:00.262 15:47:48 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:02.806 Waiting for block devices as requested 00:38:02.806 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:02.807 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:02.807 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:02.807 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:02.807 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:03.079 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:03.079 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:03.079 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:03.079 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:03.339 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:03.339 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:03.599 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:03.599 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:03.599 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:03.859 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:03.859 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:03.859 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:04.437 15:47:53 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:04.437 15:47:53 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:04.437 15:47:53 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:38:04.437 15:47:53 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:38:04.437 15:47:53 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:04.437 15:47:53 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:38:04.437 15:47:53 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:04.437 15:47:53 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:04.437 15:47:53 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:04.437 15:47:53 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:04.437 15:47:53 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:06.351 15:47:55 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:06.351 00:38:06.351 real 1m18.455s 
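The block above is the nvmf_dif teardown: unload the NVMe transport modules, kill the nvmf_tgt process by PID, strip the tagged firewall rules, and remove the test namespace. A minimal standalone sketch of the same sequence, assuming a hypothetical teardown_nvmf_tcp helper; the namespace and interface names (cvl_0_0_ns_spdk, cvl_0_1) and the SPDK_NVMF iptables tag are taken from this run:

    teardown_nvmf_tcp() {
        local pid=$1                          # nvmf_tgt PID (905395 in this run)
        sync                                  # flush dirty pages before unloading transports
        modprobe -v -r nvme-tcp               # also drops its nvme_fabrics/nvme_keyring users
        modprobe -v -r nvme-fabrics           # no-op if already removed above
        kill "$pid"
        while kill -0 "$pid" 2>/dev/null; do sleep 0.1; done   # wait for the target to exit
        iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only the tagged rules
        ip netns delete cvl_0_0_ns_spdk       # drop the target-side namespace
        ip -4 addr flush cvl_0_1              # clear the initiator-side test address
    }

The grep -v SPDK_NVMF filter works because every rule the test inserts carries that comment, so the restore step removes only what the test added.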
00:38:06.351 user 7m53.099s 00:38:06.351 sys 0m22.526s 00:38:06.351 15:47:55 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:06.351 15:47:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:06.351 ************************************ 00:38:06.351 END TEST nvmf_dif 00:38:06.351 ************************************ 00:38:06.351 15:47:55 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:06.351 15:47:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:06.351 15:47:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:06.351 15:47:55 -- common/autotest_common.sh@10 -- # set +x 00:38:06.351 ************************************ 00:38:06.351 START TEST nvmf_abort_qd_sizes 00:38:06.351 ************************************ 00:38:06.351 15:47:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:06.612 * Looking for test storage... 00:38:06.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:06.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.612 --rc genhtml_branch_coverage=1 00:38:06.612 --rc genhtml_function_coverage=1 00:38:06.612 --rc genhtml_legend=1 00:38:06.612 --rc geninfo_all_blocks=1 00:38:06.612 --rc geninfo_unexecuted_blocks=1 00:38:06.612 00:38:06.612 ' 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:06.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.612 --rc genhtml_branch_coverage=1 00:38:06.612 --rc genhtml_function_coverage=1 00:38:06.612 --rc genhtml_legend=1 00:38:06.612 --rc geninfo_all_blocks=1 00:38:06.612 --rc geninfo_unexecuted_blocks=1 00:38:06.612 00:38:06.612 ' 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:06.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.612 --rc genhtml_branch_coverage=1 00:38:06.612 --rc genhtml_function_coverage=1 00:38:06.612 --rc genhtml_legend=1 00:38:06.612 --rc geninfo_all_blocks=1 00:38:06.612 --rc geninfo_unexecuted_blocks=1 00:38:06.612 00:38:06.612 ' 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:06.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.612 --rc genhtml_branch_coverage=1 00:38:06.612 --rc genhtml_function_coverage=1 00:38:06.612 --rc genhtml_legend=1 00:38:06.612 --rc geninfo_all_blocks=1 00:38:06.612 --rc geninfo_unexecuted_blocks=1 00:38:06.612 00:38:06.612 ' 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:06.612 15:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:06.613 15:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:06.613 15:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:06.613 15:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:06.613 15:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:38:06.613 15:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:06.613 15:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:06.613 15:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:06.613 15:47:55 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.613 15:47:55 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.613 15:47:55 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.613 15:47:55 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:38:06.613 15:47:55 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.613 15:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:38:06.613 15:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:06.613 15:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:06.613 15:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:06.613 15:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:06.613 15:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:06.613 15:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:06.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:06.613 15:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:06.613 15:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:06.613 15:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:06.613 15:47:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:38:06.613 15:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:06.613 15:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:06.613 15:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:06.613 15:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:06.613 15:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:06.613 15:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:06.613 15:47:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:06.613 15:47:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:06.613 15:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:06.613 15:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:06.613 15:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:38:06.613 15:47:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:14.756 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:14.756 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:14.756 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:14.756 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:14.756 15:48:02 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:14.756 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:14.757 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:14.757 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:14.757 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:14.757 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:14.757 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:14.757 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:14.757 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:14.757 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:14.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:14.757 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.701 ms 00:38:14.757 00:38:14.757 --- 10.0.0.2 ping statistics --- 00:38:14.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:14.757 rtt min/avg/max/mdev = 0.701/0.701/0.701/0.000 ms 00:38:14.757 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:14.757 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
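The nvmf_tcp_init block above carves the two E810 ports into a back-to-back topology: the target port moves into a fresh network namespace, each side gets one address on 10.0.0.0/24, and a ping in each direction validates the wiring. A condensed sketch of the same setup, with eth_tgt and eth_ini as hypothetical stand-ins for cvl_0_0 and cvl_0_1; the addresses, port, and tagged iptables rule are as logged:

    ip netns add nvmf_tgt_ns                      # assumed name; this run uses cvl_0_0_ns_spdk
    ip link set eth_tgt netns nvmf_tgt_ns         # target NIC leaves the root namespace
    ip addr add 10.0.0.1/24 dev eth_ini           # initiator side stays in the root ns
    ip netns exec nvmf_tgt_ns ip addr add 10.0.0.2/24 dev eth_tgt
    ip link set eth_ini up
    ip netns exec nvmf_tgt_ns ip link set eth_tgt up
    ip netns exec nvmf_tgt_ns ip link set lo up
    # open the NVMe/TCP listener port, tagged so teardown can strip it again
    iptables -I INPUT 1 -i eth_ini -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                            # initiator -> target
    ip netns exec nvmf_tgt_ns ping -c 1 10.0.0.1  # target -> initiator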
00:38:14.757 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:38:14.757 00:38:14.757 --- 10.0.0.1 ping statistics --- 00:38:14.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:14.757 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:38:14.757 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:14.757 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:38:14.757 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:38:14.757 15:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:18.055 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:18.055 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:18.056 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:18.056 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:18.056 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:18.056 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:18.056 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:18.056 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:18.056 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:18.056 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:18.056 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:18.056 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:18.056 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:18.056 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:18.056 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:18.056 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:18.056 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:38:18.056 15:48:06 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:18.056 15:48:06 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:18.056 15:48:06 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:18.056 15:48:06 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:18.056 15:48:06 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:18.056 15:48:06 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:18.056 15:48:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:38:18.056 15:48:06 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:18.056 15:48:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:18.056 15:48:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:18.056 15:48:06 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=925139 00:38:18.056 15:48:06 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 925139 00:38:18.056 15:48:06 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:38:18.056 15:48:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 925139 ']' 00:38:18.056 15:48:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:18.056 15:48:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:18.056 15:48:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:38:18.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:18.056 15:48:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:18.056 15:48:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:18.056 [2024-11-20 15:48:06.974595] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:38:18.056 [2024-11-20 15:48:06.974642] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:18.317 [2024-11-20 15:48:07.069700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:18.317 [2024-11-20 15:48:07.114770] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:18.317 [2024-11-20 15:48:07.114822] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:18.317 [2024-11-20 15:48:07.114831] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:18.317 [2024-11-20 15:48:07.114838] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:18.317 [2024-11-20 15:48:07.114844] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:18.317 [2024-11-20 15:48:07.116988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:18.317 [2024-11-20 15:48:07.117122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:18.317 [2024-11-20 15:48:07.117280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:18.317 [2024-11-20 15:48:07.117281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:18.890 15:48:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:18.890 15:48:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:38:18.890 15:48:07 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:18.890 15:48:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:18.890 15:48:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:18.890 15:48:07 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:18.890 15:48:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:38:18.890 15:48:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:38:18.890 15:48:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:38:18.890 15:48:07 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:38:18.890 15:48:07 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:38:18.890 15:48:07 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:38:18.890 15:48:07 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:38:18.890 15:48:07 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:38:18.890 15:48:07 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:38:18.890 15:48:07 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:38:18.890 
15:48:07 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:38:18.890 15:48:07 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:38:18.890 15:48:07 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:38:18.890 15:48:07 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:38:18.890 15:48:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:38:18.890 15:48:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:38:18.890 15:48:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:38:18.890 15:48:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:18.890 15:48:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:18.890 15:48:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:19.225 ************************************ 00:38:19.225 START TEST spdk_target_abort 00:38:19.225 ************************************ 00:38:19.225 15:48:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:38:19.225 15:48:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:38:19.225 15:48:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:38:19.225 15:48:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:19.225 15:48:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:19.520 spdk_targetn1 00:38:19.520 15:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.520 15:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:19.520 15:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:19.520 15:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:19.520 [2024-11-20 15:48:08.200940] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:19.520 15:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.520 15:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:38:19.520 15:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:19.520 15:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:19.520 15:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.520 15:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:38:19.520 15:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:19.520 15:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:19.520 15:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.520 15:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:38:19.520 15:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:19.520 15:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:19.520 [2024-11-20 15:48:08.249335] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:19.520 15:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.520 15:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:38:19.520 15:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:19.520 15:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:19.520 15:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:38:19.520 15:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:19.520 15:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:19.520 15:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:19.520 15:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:19.520 15:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:19.520 15:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:19.520 15:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:19.520 15:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:19.520 15:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:19.520 15:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:19.520 15:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:38:19.520 15:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:19.520 15:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:19.520 15:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:19.520 15:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:19.520 15:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:19.520 15:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:19.781 [2024-11-20 15:48:08.527771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:188 nsid:1 lba:232 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:38:19.781 [2024-11-20 15:48:08.527826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:001f p:1 m:0 dnr:0 00:38:19.781 [2024-11-20 15:48:08.535652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:432 len:8 PRP1 0x200004abe000 PRP2 0x0 00:38:19.781 [2024-11-20 15:48:08.535683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0038 p:1 m:0 dnr:0 00:38:19.781 [2024-11-20 15:48:08.535934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:456 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:38:19.781 [2024-11-20 15:48:08.535951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:003a p:1 m:0 dnr:0 00:38:19.781 [2024-11-20 15:48:08.550833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:848 len:8 PRP1 0x200004abe000 PRP2 0x0 00:38:19.781 [2024-11-20 15:48:08.550865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:006e p:1 m:0 dnr:0 00:38:19.781 [2024-11-20 15:48:08.557741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1040 len:8 PRP1 0x200004abe000 PRP2 0x0 00:38:19.781 [2024-11-20 15:48:08.557770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0083 p:1 m:0 dnr:0 00:38:19.781 [2024-11-20 15:48:08.595780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2192 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:38:19.781 [2024-11-20 15:48:08.595820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:38:19.781 [2024-11-20 15:48:08.610813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2648 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:38:19.781 [2024-11-20 15:48:08.610845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:38:19.781 [2024-11-20 15:48:08.627289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3208 len:8 PRP1 0x200004abe000 PRP2 0x0 00:38:19.781 [2024-11-20 15:48:08.627321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0095 p:0 m:0 dnr:0 00:38:19.781 [2024-11-20 15:48:08.648857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3768 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:38:19.781 [2024-11-20 15:48:08.648889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00d9 p:0 m:0 dnr:0 00:38:23.081 Initializing NVMe Controllers 00:38:23.081 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:23.081 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:23.081 Initialization complete. Launching workers. 
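The rpc_cmd calls above provision the abort target in four steps: attach the local NVMe device at 0000:65:00.0 as a bdev, create the TCP transport, create the test subsystem with its namespace, and expose it on the listener. Issued directly with scripts/rpc.py the same sequence would look like this (a sketch; rpc_cmd in the test is a thin wrapper, and the default RPC socket path is assumed):

    ./scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn \
        -t tcp -a 10.0.0.2 -s 4420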
00:38:23.081 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10406, failed: 9 00:38:23.081 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2055, failed to submit 8360 00:38:23.081 success 749, unsuccessful 1306, failed 0 00:38:23.081 15:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:23.081 15:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:23.081 [2024-11-20 15:48:11.693911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:173 nsid:1 lba:304 len:8 PRP1 0x200004e3e000 PRP2 0x0 00:38:23.081 [2024-11-20 15:48:11.693956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:173 cdw0:0 sqhd:002c p:1 m:0 dnr:0 00:38:23.081 [2024-11-20 15:48:11.720408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:186 nsid:1 lba:1008 len:8 PRP1 0x200004e56000 PRP2 0x0 00:38:23.081 [2024-11-20 15:48:11.720433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:186 cdw0:0 sqhd:0088 p:1 m:0 dnr:0 00:38:23.081 [2024-11-20 15:48:11.776253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:180 nsid:1 lba:2312 len:8 PRP1 0x200004e4c000 PRP2 0x0 00:38:23.081 [2024-11-20 15:48:11.776278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:180 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:38:23.081 [2024-11-20 15:48:11.824299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:178 nsid:1 lba:3512 len:8 PRP1 0x200004e52000 PRP2 0x0 00:38:23.081 [2024-11-20 15:48:11.824323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:178 cdw0:0 sqhd:00ba p:0 m:0 dnr:0 00:38:23.342 [2024-11-20 15:48:12.294371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:184 nsid:1 lba:14152 len:8 PRP1 0x200004e4e000 PRP2 0x0 00:38:23.342 [2024-11-20 15:48:12.294402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:184 cdw0:0 sqhd:00f3 p:1 m:0 dnr:0 00:38:23.914 [2024-11-20 15:48:12.827344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:177 nsid:1 lba:26504 len:8 PRP1 0x200004e46000 PRP2 0x0 00:38:23.914 [2024-11-20 15:48:12.827377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:177 cdw0:0 sqhd:00f5 p:1 m:0 dnr:0 00:38:24.857 [2024-11-20 15:48:13.676790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:184 nsid:1 lba:45672 len:8 PRP1 0x200004e60000 PRP2 0x0 00:38:24.857 [2024-11-20 15:48:13.676820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:184 cdw0:0 sqhd:0053 p:1 m:0 dnr:0 00:38:25.429 [2024-11-20 15:48:14.108073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:191 nsid:1 lba:55424 len:8 PRP1 0x200004e42000 PRP2 0x0 00:38:25.429 [2024-11-20 15:48:14.108096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:191 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:38:25.691 [2024-11-20 15:48:14.563008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:188 nsid:1 
lba:65736 len:8 PRP1 0x200004e4e000 PRP2 0x0 00:38:25.691 [2024-11-20 15:48:14.563033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:188 cdw0:0 sqhd:001e p:1 m:0 dnr:0 00:38:25.952 Initializing NVMe Controllers 00:38:25.952 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:25.952 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:25.952 Initialization complete. Launching workers. 00:38:25.952 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8581, failed: 9 00:38:25.952 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1224, failed to submit 7366 00:38:25.952 success 345, unsuccessful 879, failed 0 00:38:25.952 15:48:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:25.952 15:48:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:26.894 [2024-11-20 15:48:15.534215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:155 nsid:1 lba:56432 len:8 PRP1 0x200004ad8000 PRP2 0x0 00:38:26.894 [2024-11-20 15:48:15.534244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:155 cdw0:0 sqhd:0043 p:1 m:0 dnr:0 00:38:29.438 Initializing NVMe Controllers 00:38:29.438 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:29.438 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:29.438 Initialization complete. Launching workers. 
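Each abort summary in this test balances the same way: aborts submitted plus aborts that failed to submit equals completed plus failed I/O, and every submitted abort ends up either success or unsuccessful. Checking the qd=24 summary above as a quick example:

    echo $(( 1224 + 7366 ))   # 8590 = 8581 completed + 9 failed I/Os
    echo $((  345 +  879 ))   # 1224 = all submitted aborts accounted for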
00:38:29.438 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43686, failed: 1 00:38:29.438 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2797, failed to submit 40890 00:38:29.438 success 584, unsuccessful 2213, failed 0 00:38:29.438 15:48:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:38:29.438 15:48:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.438 15:48:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:29.438 15:48:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.438 15:48:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:38:29.438 15:48:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.438 15:48:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:31.350 15:48:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:31.350 15:48:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 925139 00:38:31.350 15:48:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 925139 ']' 00:38:31.350 15:48:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 925139 00:38:31.350 15:48:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:38:31.350 15:48:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:31.350 15:48:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 925139 00:38:31.350 15:48:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:31.350 15:48:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:31.350 15:48:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 925139' 00:38:31.350 killing process with pid 925139 00:38:31.350 15:48:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 925139 00:38:31.350 15:48:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 925139 00:38:31.350 00:38:31.350 real 0m12.217s 00:38:31.350 user 0m49.762s 00:38:31.350 sys 0m2.038s 00:38:31.350 15:48:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:31.350 15:48:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:31.350 ************************************ 00:38:31.351 END TEST spdk_target_abort 00:38:31.351 ************************************ 00:38:31.351 15:48:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:38:31.351 15:48:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:31.351 15:48:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:31.351 15:48:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:31.351 ************************************ 00:38:31.351 START TEST kernel_target_abort 00:38:31.351 
************************************ 00:38:31.351 15:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:38:31.351 15:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:38:31.351 15:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:38:31.351 15:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:31.351 15:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:31.351 15:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:31.351 15:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:31.351 15:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:31.351 15:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:31.351 15:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:31.351 15:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:31.351 15:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:31.351 15:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:38:31.351 15:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:38:31.351 15:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:38:31.351 15:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:31.351 15:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:31.351 15:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:31.351 15:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:38:31.351 15:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:38:31.351 15:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:38:31.351 15:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:31.351 15:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:34.656 Waiting for block devices as requested 00:38:34.917 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:34.917 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:34.917 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:34.917 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:35.177 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:35.177 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:35.177 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:35.438 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:35.438 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:35.699 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:35.699 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:35.699 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:35.960 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:35.960 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:35.960 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:36.221 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:36.221 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:36.482 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:38:36.482 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:38:36.482 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:38:36.482 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:38:36.482 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:38:36.482 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:38:36.482 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:38:36.482 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:38:36.482 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:38:36.482 No valid GPT data, bailing 00:38:36.482 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:38:36.482 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:38:36.482 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:38:36.482 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:38:36.482 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:38:36.482 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:36.482 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:36.744 15:48:25 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:38:36.744 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:38:36.744 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:38:36.744 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:38:36.744 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:38:36.744 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:38:36.744 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:38:36.744 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:38:36.744 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:38:36.744 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:38:36.744 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:38:36.744 00:38:36.744 Discovery Log Number of Records 2, Generation counter 2 00:38:36.744 =====Discovery Log Entry 0====== 00:38:36.744 trtype: tcp 00:38:36.744 adrfam: ipv4 00:38:36.744 subtype: current discovery subsystem 00:38:36.744 treq: not specified, sq flow control disable supported 00:38:36.744 portid: 1 00:38:36.744 trsvcid: 4420 00:38:36.744 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:38:36.744 traddr: 10.0.0.1 00:38:36.744 eflags: none 00:38:36.744 sectype: none 00:38:36.744 =====Discovery Log Entry 1====== 00:38:36.744 trtype: tcp 00:38:36.744 adrfam: ipv4 00:38:36.744 subtype: nvme subsystem 00:38:36.744 treq: not specified, sq flow control disable supported 00:38:36.744 portid: 1 00:38:36.744 trsvcid: 4420 00:38:36.744 subnqn: nqn.2016-06.io.spdk:testnqn 00:38:36.744 traddr: 10.0.0.1 00:38:36.744 eflags: none 00:38:36.744 sectype: none 00:38:36.744 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:38:36.744 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:36.744 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:36.744 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:38:36.744 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:36.744 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:36.744 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:36.744 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:36.744 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:36.744 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:36.744 15:48:25 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:36.744 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:36.744 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:36.744 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:36.744 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:38:36.744 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:36.744 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:38:36.744 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:36.744 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:36.744 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:36.744 15:48:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:40.046 Initializing NVMe Controllers 00:38:40.046 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:40.046 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:40.046 Initialization complete. Launching workers. 00:38:40.046 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67348, failed: 0 00:38:40.046 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67348, failed to submit 0 00:38:40.046 success 0, unsuccessful 67348, failed 0 00:38:40.046 15:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:40.046 15:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:43.345 Initializing NVMe Controllers 00:38:43.345 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:43.345 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:43.345 Initialization complete. Launching workers. 
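The configure_kernel_target trace above builds a Linux kernel NVMe-oF target entirely through configfs: one subsystem, one namespace backed by the local /dev/nvme0n1, and a TCP port on 10.0.0.1:4420, joined by a symlink; the abort runs then sweep the queue depths declared in qds=(4 24 64). xtrace does not echo the redirect targets of the echo calls, so the attribute file names below are assumptions based on the standard nvmet configfs layout; a minimal sketch:

# Hedged sketch of the configfs setup driven above; attribute names assumed,
# values taken from the trace. nvmet_tcp is pulled in when the port goes live
# (it is removed explicitly at teardown).
modprobe nvmet
cd /sys/kernel/config/nvmet
mkdir subsystems/nqn.2016-06.io.spdk:testnqn
mkdir subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
mkdir ports/1
echo SPDK-nqn.2016-06.io.spdk:testnqn > subsystems/nqn.2016-06.io.spdk:testnqn/attr_serial  # assumed redirect target
echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
echo /dev/nvme0n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
echo 10.0.0.1 > ports/1/addr_traddr
echo tcp > ports/1/addr_trtype
echo 4420 > ports/1/addr_trsvcid
echo ipv4 > ports/1/addr_adrfam
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/

With the link in place, the nvme discover run above returns exactly two log entries: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn.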
00:38:43.345 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 116468, failed: 0 00:38:43.345 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29310, failed to submit 87158 00:38:43.345 success 0, unsuccessful 29310, failed 0 00:38:43.345 15:48:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:43.345 15:48:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:46.647 Initializing NVMe Controllers 00:38:46.647 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:46.647 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:46.647 Initialization complete. Launching workers. 00:38:46.647 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 145699, failed: 0 00:38:46.647 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36466, failed to submit 109233 00:38:46.647 success 0, unsuccessful 36466, failed 0 00:38:46.647 15:48:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:38:46.647 15:48:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:38:46.647 15:48:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:38:46.647 15:48:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:46.647 15:48:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:46.647 15:48:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:46.647 15:48:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:46.647 15:48:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:38:46.647 15:48:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:38:46.647 15:48:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:49.952 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:49.952 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:49.952 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:49.952 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:49.952 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:49.952 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:49.952 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:49.952 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:49.952 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:49.952 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:49.952 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:49.952 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:49.952 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:49.952 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:49.952 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:38:49.952 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:51.337 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:38:51.908 00:38:51.908 real 0m20.423s 00:38:51.908 user 0m9.976s 00:38:51.908 sys 0m6.072s 00:38:51.908 15:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:51.908 15:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:51.908 ************************************ 00:38:51.908 END TEST kernel_target_abort 00:38:51.908 ************************************ 00:38:51.908 15:48:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:38:51.908 15:48:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:38:51.908 15:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:51.908 15:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:38:51.908 15:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:51.908 15:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:38:51.908 15:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:51.908 15:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:51.908 rmmod nvme_tcp 00:38:51.908 rmmod nvme_fabrics 00:38:51.908 rmmod nvme_keyring 00:38:51.908 15:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:51.908 15:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:38:51.908 15:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:38:51.908 15:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 925139 ']' 00:38:51.908 15:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 925139 00:38:51.908 15:48:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 925139 ']' 00:38:51.908 15:48:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 925139 00:38:51.908 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (925139) - No such process 00:38:51.908 15:48:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 925139 is not found' 00:38:51.908 Process with pid 925139 is not found 00:38:51.908 15:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:38:51.908 15:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:55.211 Waiting for block devices as requested 00:38:55.211 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:55.472 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:55.472 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:55.472 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:55.472 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:55.732 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:55.732 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:55.732 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:55.993 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:55.993 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:56.253 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:56.253 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:56.253 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:56.513 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:56.513 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:56.513 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:56.773 0000:00:01.1 
(8086 0b00): vfio-pci -> ioatdma 00:38:57.035 15:48:45 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:57.035 15:48:45 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:57.035 15:48:45 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:38:57.035 15:48:45 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:38:57.035 15:48:45 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:57.035 15:48:45 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:38:57.035 15:48:45 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:57.035 15:48:45 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:57.035 15:48:45 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:57.035 15:48:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:57.035 15:48:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:59.581 15:48:47 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:59.581 00:38:59.581 real 0m52.659s 00:38:59.581 user 1m5.077s 00:38:59.581 sys 0m19.447s 00:38:59.581 15:48:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:59.581 15:48:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:59.581 ************************************ 00:38:59.581 END TEST nvmf_abort_qd_sizes 00:38:59.581 ************************************ 00:38:59.581 15:48:47 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:59.581 15:48:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:59.581 15:48:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:59.581 15:48:47 -- common/autotest_common.sh@10 -- # set +x 00:38:59.581 ************************************ 00:38:59.581 START TEST keyring_file 00:38:59.581 ************************************ 00:38:59.581 15:48:48 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:59.581 * Looking for test storage... 
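The keyring_file suite starting here runs on a host that clean_kernel_target and nvmftestfini, traced above, have already cleaned up: the configfs tree is unwound in reverse order, the port link is removed before the rmdirs, and the nvmet modules are unloaded. A sketch of that order, with the echo 0 redirect assumed to be the namespace enable flag:

# Teardown mirrored from clean_kernel_target above; the rmdirs only succeed
# once the namespace is disabled and the port-to-subsystem symlink is gone.
echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable  # assumed redirect target
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
rmdir /sys/kernel/config/nvmet/ports/1
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
modprobe -r nvmet_tcp nvmet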
00:38:59.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:59.581 15:48:48 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:59.581 15:48:48 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:38:59.581 15:48:48 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:59.581 15:48:48 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:59.581 15:48:48 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:59.581 15:48:48 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:59.582 15:48:48 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:59.582 15:48:48 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:38:59.582 15:48:48 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:38:59.582 15:48:48 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:38:59.582 15:48:48 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:38:59.582 15:48:48 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:38:59.582 15:48:48 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:38:59.582 15:48:48 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:38:59.582 15:48:48 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:59.582 15:48:48 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:38:59.582 15:48:48 keyring_file -- scripts/common.sh@345 -- # : 1 00:38:59.582 15:48:48 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:59.582 15:48:48 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:59.582 15:48:48 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:38:59.582 15:48:48 keyring_file -- scripts/common.sh@353 -- # local d=1 00:38:59.582 15:48:48 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:59.582 15:48:48 keyring_file -- scripts/common.sh@355 -- # echo 1 00:38:59.582 15:48:48 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:38:59.582 15:48:48 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:38:59.582 15:48:48 keyring_file -- scripts/common.sh@353 -- # local d=2 00:38:59.582 15:48:48 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:59.582 15:48:48 keyring_file -- scripts/common.sh@355 -- # echo 2 00:38:59.582 15:48:48 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:38:59.582 15:48:48 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:59.582 15:48:48 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:59.582 15:48:48 keyring_file -- scripts/common.sh@368 -- # return 0 00:38:59.582 15:48:48 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:59.582 15:48:48 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:59.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:59.582 --rc genhtml_branch_coverage=1 00:38:59.582 --rc genhtml_function_coverage=1 00:38:59.582 --rc genhtml_legend=1 00:38:59.582 --rc geninfo_all_blocks=1 00:38:59.582 --rc geninfo_unexecuted_blocks=1 00:38:59.582 00:38:59.582 ' 00:38:59.582 15:48:48 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:59.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:59.582 --rc genhtml_branch_coverage=1 00:38:59.582 --rc genhtml_function_coverage=1 00:38:59.582 --rc genhtml_legend=1 00:38:59.582 --rc geninfo_all_blocks=1 
00:38:59.582 --rc geninfo_unexecuted_blocks=1 00:38:59.582 00:38:59.582 ' 00:38:59.582 15:48:48 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:59.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:59.582 --rc genhtml_branch_coverage=1 00:38:59.582 --rc genhtml_function_coverage=1 00:38:59.582 --rc genhtml_legend=1 00:38:59.582 --rc geninfo_all_blocks=1 00:38:59.582 --rc geninfo_unexecuted_blocks=1 00:38:59.582 00:38:59.582 ' 00:38:59.582 15:48:48 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:59.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:59.582 --rc genhtml_branch_coverage=1 00:38:59.582 --rc genhtml_function_coverage=1 00:38:59.582 --rc genhtml_legend=1 00:38:59.582 --rc geninfo_all_blocks=1 00:38:59.582 --rc geninfo_unexecuted_blocks=1 00:38:59.582 00:38:59.582 ' 00:38:59.582 15:48:48 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:59.582 15:48:48 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:59.582 15:48:48 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:38:59.582 15:48:48 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:59.582 15:48:48 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:59.582 15:48:48 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:59.582 15:48:48 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:59.582 15:48:48 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:59.582 15:48:48 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:59.582 15:48:48 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:59.582 15:48:48 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:59.582 15:48:48 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:59.582 15:48:48 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:59.582 15:48:48 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:59.582 15:48:48 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:59.582 15:48:48 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:59.582 15:48:48 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:59.582 15:48:48 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:59.582 15:48:48 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:59.582 15:48:48 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:59.582 15:48:48 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:38:59.582 15:48:48 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:59.582 15:48:48 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:59.582 15:48:48 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:59.582 15:48:48 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:59.582 15:48:48 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:59.582 15:48:48 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:59.582 15:48:48 keyring_file -- paths/export.sh@5 -- # export PATH 00:38:59.582 15:48:48 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:59.582 15:48:48 keyring_file -- nvmf/common.sh@51 -- # : 0 00:38:59.582 15:48:48 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:59.582 15:48:48 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:59.582 15:48:48 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:59.582 15:48:48 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:59.582 15:48:48 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:59.582 15:48:48 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:59.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:59.582 15:48:48 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:59.582 15:48:48 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:59.582 15:48:48 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:59.582 15:48:48 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:59.582 15:48:48 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:59.582 15:48:48 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:59.582 15:48:48 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:38:59.582 15:48:48 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:38:59.582 15:48:48 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:38:59.582 15:48:48 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:59.582 15:48:48 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
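prep_key, entered here, turns the hex key string into an NVMe TLS PSK in the interchange format and stashes it in a mktemp file chmod'ed to 0600 (the trace continues below). xtrace does not echo the body of the python heredoc, so the following is a hedged reconstruction, assuming the 32-character key string is used verbatim as the PSK bytes with its little-endian CRC32 appended, and that digest 0 selects no HMAC transform:

# Assumed equivalent of the "python -" heredoc in format_interchange_psk;
# interchange format: base64(key bytes + little-endian CRC32), digest field 00.
# Verify against nvmf/common.sh before relying on it.
python3 -c 'import base64, zlib
key = b"00112233445566778899aabbccddeeff"  # key string used verbatim as PSK bytes (assumption)
crc = zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:00:" + base64.b64encode(key + crc).decode() + ":")'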
00:38:59.582 15:48:48 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:59.582 15:48:48 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:59.582 15:48:48 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:59.582 15:48:48 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:59.582 15:48:48 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.m6Zk2FT51f 00:38:59.582 15:48:48 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:59.582 15:48:48 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:59.582 15:48:48 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:59.582 15:48:48 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:59.582 15:48:48 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:59.582 15:48:48 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:59.582 15:48:48 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:59.582 15:48:48 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.m6Zk2FT51f 00:38:59.582 15:48:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.m6Zk2FT51f 00:38:59.582 15:48:48 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.m6Zk2FT51f 00:38:59.582 15:48:48 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:38:59.582 15:48:48 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:59.582 15:48:48 keyring_file -- keyring/common.sh@17 -- # name=key1 00:38:59.582 15:48:48 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:59.582 15:48:48 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:59.582 15:48:48 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:59.582 15:48:48 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.uNvIYiVQoo 00:38:59.582 15:48:48 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:59.582 15:48:48 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:59.582 15:48:48 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:59.583 15:48:48 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:59.583 15:48:48 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:59.583 15:48:48 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:59.583 15:48:48 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:59.583 15:48:48 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.uNvIYiVQoo 00:38:59.583 15:48:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.uNvIYiVQoo 00:38:59.583 15:48:48 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.uNvIYiVQoo 00:38:59.583 15:48:48 keyring_file -- keyring/file.sh@30 -- # tgtpid=936122 00:38:59.583 15:48:48 keyring_file -- keyring/file.sh@32 -- # waitforlisten 936122 00:38:59.583 15:48:48 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:59.583 15:48:48 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 936122 ']' 00:38:59.583 15:48:48 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:59.583 15:48:48 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:59.583 15:48:48 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:59.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:59.583 15:48:48 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:59.583 15:48:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:59.583 [2024-11-20 15:48:48.432272] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:38:59.583 [2024-11-20 15:48:48.432351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid936122 ] 00:38:59.583 [2024-11-20 15:48:48.525199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:59.843 [2024-11-20 15:48:48.578088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:00.414 15:48:49 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:00.414 15:48:49 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:39:00.414 15:48:49 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:39:00.414 15:48:49 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:00.414 15:48:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:00.414 [2024-11-20 15:48:49.245188] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:00.414 null0 00:39:00.414 [2024-11-20 15:48:49.277236] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:00.414 [2024-11-20 15:48:49.277590] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:00.414 15:48:49 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:00.414 15:48:49 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:00.414 15:48:49 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:00.414 15:48:49 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:00.414 15:48:49 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:39:00.414 15:48:49 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:00.414 15:48:49 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:39:00.414 15:48:49 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:00.414 15:48:49 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:00.414 15:48:49 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:00.414 15:48:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:00.414 [2024-11-20 15:48:49.309292] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:39:00.414 request: 00:39:00.414 { 00:39:00.414 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:39:00.414 "secure_channel": false, 00:39:00.414 "listen_address": { 00:39:00.414 "trtype": "tcp", 00:39:00.414 "traddr": "127.0.0.1", 00:39:00.414 "trsvcid": "4420" 00:39:00.414 }, 00:39:00.414 "method": "nvmf_subsystem_add_listener", 00:39:00.414 "req_id": 1 00:39:00.414 } 00:39:00.414 Got JSON-RPC error response 00:39:00.414 response: 00:39:00.414 { 00:39:00.414 "code": 
-32602, 00:39:00.414 "message": "Invalid parameters" 00:39:00.414 } 00:39:00.414 15:48:49 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:39:00.414 15:48:49 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:00.414 15:48:49 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:00.414 15:48:49 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:00.414 15:48:49 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:00.414 15:48:49 keyring_file -- keyring/file.sh@47 -- # bperfpid=936162 00:39:00.414 15:48:49 keyring_file -- keyring/file.sh@49 -- # waitforlisten 936162 /var/tmp/bperf.sock 00:39:00.414 15:48:49 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 936162 ']' 00:39:00.414 15:48:49 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:39:00.414 15:48:49 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:00.414 15:48:49 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:00.414 15:48:49 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:00.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:00.414 15:48:49 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:00.414 15:48:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:00.414 [2024-11-20 15:48:49.370079] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:39:00.414 [2024-11-20 15:48:49.370153] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid936162 ] 00:39:00.675 [2024-11-20 15:48:49.466956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:00.675 [2024-11-20 15:48:49.519862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:01.247 15:48:50 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:01.247 15:48:50 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:39:01.247 15:48:50 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.m6Zk2FT51f 00:39:01.247 15:48:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.m6Zk2FT51f 00:39:01.509 15:48:50 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.uNvIYiVQoo 00:39:01.509 15:48:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.uNvIYiVQoo 00:39:01.769 15:48:50 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:39:01.769 15:48:50 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:39:01.769 15:48:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:01.769 15:48:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:01.769 15:48:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:02.030 
15:48:50 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.m6Zk2FT51f == \/\t\m\p\/\t\m\p\.\m\6\Z\k\2\F\T\5\1\f ]] 00:39:02.030 15:48:50 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:39:02.030 15:48:50 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:39:02.030 15:48:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:02.030 15:48:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:02.030 15:48:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:02.030 15:48:50 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.uNvIYiVQoo == \/\t\m\p\/\t\m\p\.\u\N\v\I\Y\i\V\Q\o\o ]] 00:39:02.031 15:48:50 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:39:02.031 15:48:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:02.031 15:48:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:02.031 15:48:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:02.031 15:48:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:02.031 15:48:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:02.292 15:48:51 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:39:02.292 15:48:51 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:39:02.292 15:48:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:02.292 15:48:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:02.292 15:48:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:02.292 15:48:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:02.292 15:48:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:02.553 15:48:51 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:39:02.553 15:48:51 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:02.553 15:48:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:02.814 [2024-11-20 15:48:51.545578] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:02.814 nvme0n1 00:39:02.814 15:48:51 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:39:02.814 15:48:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:02.814 15:48:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:02.814 15:48:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:02.814 15:48:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:02.814 15:48:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:03.075 15:48:51 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:39:03.075 15:48:51 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:39:03.075 15:48:51 keyring_file -- 
keyring/common.sh@12 -- # get_key key1 00:39:03.075 15:48:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:03.075 15:48:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:03.075 15:48:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:03.075 15:48:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:03.336 15:48:52 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:39:03.336 15:48:52 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:03.336 Running I/O for 1 seconds... 00:39:04.279 17279.00 IOPS, 67.50 MiB/s 00:39:04.279 Latency(us) 00:39:04.279 [2024-11-20T14:48:53.239Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:04.279 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:39:04.279 nvme0n1 : 1.00 17342.42 67.74 0.00 0.00 7368.32 3686.40 13817.17 00:39:04.279 [2024-11-20T14:48:53.239Z] =================================================================================================================== 00:39:04.279 [2024-11-20T14:48:53.239Z] Total : 17342.42 67.74 0.00 0.00 7368.32 3686.40 13817.17 00:39:04.279 { 00:39:04.279 "results": [ 00:39:04.279 { 00:39:04.279 "job": "nvme0n1", 00:39:04.279 "core_mask": "0x2", 00:39:04.279 "workload": "randrw", 00:39:04.279 "percentage": 50, 00:39:04.279 "status": "finished", 00:39:04.279 "queue_depth": 128, 00:39:04.279 "io_size": 4096, 00:39:04.279 "runtime": 1.003724, 00:39:04.279 "iops": 17342.416839688998, 00:39:04.279 "mibps": 67.74381578003515, 00:39:04.279 "io_failed": 0, 00:39:04.279 "io_timeout": 0, 00:39:04.279 "avg_latency_us": 7368.324448784972, 00:39:04.279 "min_latency_us": 3686.4, 00:39:04.279 "max_latency_us": 13817.173333333334 00:39:04.279 } 00:39:04.279 ], 00:39:04.279 "core_count": 1 00:39:04.279 } 00:39:04.279 15:48:53 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:04.279 15:48:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:04.540 15:48:53 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:39:04.540 15:48:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:04.540 15:48:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:04.540 15:48:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:04.540 15:48:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:04.540 15:48:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:04.800 15:48:53 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:39:04.800 15:48:53 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:39:04.800 15:48:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:04.800 15:48:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:04.800 15:48:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:04.800 15:48:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:04.800 15:48:53 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:05.060 [2024-11-20 15:48:53.880569] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:05.060 [2024-11-20 15:48:53.880572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b77740 (107): Transport endpoint is not connected 00:39:05.060 [2024-11-20 15:48:53.881566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b77740 (9): Bad file descriptor 00:39:05.060 [2024-11-20 15:48:53.882568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:39:05.060 [2024-11-20 15:48:53.882578] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:05.060 [2024-11-20 15:48:53.882584] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:39:05.060 [2024-11-20 15:48:53.882591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state.
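Attaching with --psk key1, where the key0 attach just above succeeded, is the suite's wrong-key negative case: the TLS handshake evidently fails, the socket drops (errno 107), and the attach surfaces as Operation not permitted; the JSON-RPC request and error response follow below. Around each such step the suite asserts keyring refcounts, composed from the keyring_get_keys and jq calls already traced; as a standalone command:

# The get_refcnt pattern traced above: list keys over the bdevperf RPC socket,
# pick one entry by name, print its refcnt.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
    | jq '.[] | select(.name == "key0")' | jq -r .refcnt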
00:39:05.060 request: 00:39:05.060 { 00:39:05.060 "name": "nvme0", 00:39:05.060 "trtype": "tcp", 00:39:05.060 "traddr": "127.0.0.1", 00:39:05.060 "adrfam": "ipv4", 00:39:05.060 "trsvcid": "4420", 00:39:05.060 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:05.060 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:05.060 "prchk_reftag": false, 00:39:05.060 "prchk_guard": false, 00:39:05.060 "hdgst": false, 00:39:05.060 "ddgst": false, 00:39:05.060 "psk": "key1", 00:39:05.060 "allow_unrecognized_csi": false, 00:39:05.060 "method": "bdev_nvme_attach_controller", 00:39:05.061 "req_id": 1 00:39:05.061 } 00:39:05.061 Got JSON-RPC error response 00:39:05.061 response: 00:39:05.061 { 00:39:05.061 "code": -5, 00:39:05.061 "message": "Input/output error" 00:39:05.061 } 00:39:05.061 15:48:53 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:05.061 15:48:53 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:05.061 15:48:53 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:05.061 15:48:53 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:05.061 15:48:53 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:39:05.061 15:48:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:05.061 15:48:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:05.061 15:48:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:05.061 15:48:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:05.061 15:48:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:05.320 15:48:54 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:39:05.321 15:48:54 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:39:05.321 15:48:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:05.321 15:48:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:05.321 15:48:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:05.321 15:48:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:05.321 15:48:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:05.321 15:48:54 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:39:05.321 15:48:54 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:39:05.321 15:48:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:05.581 15:48:54 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:39:05.581 15:48:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:39:05.841 15:48:54 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:39:05.841 15:48:54 keyring_file -- keyring/file.sh@78 -- # jq length 00:39:05.841 15:48:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:05.841 15:48:54 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:39:05.841 15:48:54 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.m6Zk2FT51f 00:39:05.841 15:48:54 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.m6Zk2FT51f 00:39:05.841 15:48:54 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:05.841 15:48:54 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.m6Zk2FT51f 00:39:05.841 15:48:54 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:05.841 15:48:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:05.841 15:48:54 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:05.841 15:48:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:05.841 15:48:54 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.m6Zk2FT51f 00:39:05.841 15:48:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.m6Zk2FT51f 00:39:06.102 [2024-11-20 15:48:54.935258] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.m6Zk2FT51f': 0100660 00:39:06.103 [2024-11-20 15:48:54.935277] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:39:06.103 request: 00:39:06.103 { 00:39:06.103 "name": "key0", 00:39:06.103 "path": "/tmp/tmp.m6Zk2FT51f", 00:39:06.103 "method": "keyring_file_add_key", 00:39:06.103 "req_id": 1 00:39:06.103 } 00:39:06.103 Got JSON-RPC error response 00:39:06.103 response: 00:39:06.103 { 00:39:06.103 "code": -1, 00:39:06.103 "message": "Operation not permitted" 00:39:06.103 } 00:39:06.103 15:48:54 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:06.103 15:48:54 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:06.103 15:48:54 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:06.103 15:48:54 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:06.103 15:48:54 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.m6Zk2FT51f 00:39:06.103 15:48:54 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.m6Zk2FT51f 00:39:06.103 15:48:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.m6Zk2FT51f 00:39:06.363 15:48:55 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.m6Zk2FT51f 00:39:06.363 15:48:55 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:39:06.363 15:48:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:06.363 15:48:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:06.363 15:48:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:06.363 15:48:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:06.363 15:48:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:06.624 15:48:55 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:39:06.624 15:48:55 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:06.624 15:48:55 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:06.624 15:48:55 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:06.624 15:48:55 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:06.624 15:48:55 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:06.624 15:48:55 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:06.624 15:48:55 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:06.624 15:48:55 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:06.624 15:48:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:06.624 [2024-11-20 15:48:55.480648] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.m6Zk2FT51f': No such file or directory 00:39:06.624 [2024-11-20 15:48:55.480662] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:39:06.624 [2024-11-20 15:48:55.480677] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:39:06.624 [2024-11-20 15:48:55.480685] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:39:06.624 [2024-11-20 15:48:55.480691] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:39:06.624 [2024-11-20 15:48:55.480698] bdev_nvme.c:6764:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:39:06.624 request: 00:39:06.624 { 00:39:06.624 "name": "nvme0", 00:39:06.624 "trtype": "tcp", 00:39:06.624 "traddr": "127.0.0.1", 00:39:06.624 "adrfam": "ipv4", 00:39:06.624 "trsvcid": "4420", 00:39:06.624 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:06.624 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:06.624 "prchk_reftag": false, 00:39:06.624 "prchk_guard": false, 00:39:06.624 "hdgst": false, 00:39:06.624 "ddgst": false, 00:39:06.624 "psk": "key0", 00:39:06.624 "allow_unrecognized_csi": false, 00:39:06.624 "method": "bdev_nvme_attach_controller", 00:39:06.624 "req_id": 1 00:39:06.624 } 00:39:06.624 Got JSON-RPC error response 00:39:06.624 response: 00:39:06.624 { 00:39:06.624 "code": -19, 00:39:06.624 "message": "No such device" 00:39:06.624 } 00:39:06.624 15:48:55 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:06.624 15:48:55 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:06.624 15:48:55 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:06.624 15:48:55 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:06.624 15:48:55 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:39:06.624 15:48:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:06.885 15:48:55 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:06.885 15:48:55 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:39:06.885 15:48:55 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:06.885 15:48:55 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:06.885 15:48:55 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:06.885 15:48:55 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:06.885 15:48:55 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.XrE5rLplrH 00:39:06.885 15:48:55 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:06.885 15:48:55 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:06.885 15:48:55 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:39:06.885 15:48:55 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:06.885 15:48:55 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:39:06.885 15:48:55 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:39:06.885 15:48:55 keyring_file -- nvmf/common.sh@733 -- # python - 00:39:06.885 15:48:55 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.XrE5rLplrH 00:39:06.885 15:48:55 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.XrE5rLplrH 00:39:06.885 15:48:55 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.XrE5rLplrH 00:39:06.885 15:48:55 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XrE5rLplrH 00:39:06.885 15:48:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XrE5rLplrH 00:39:07.146 15:48:55 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:07.146 15:48:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:07.146 nvme0n1 00:39:07.406 15:48:56 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:39:07.406 15:48:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:07.406 15:48:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:07.406 15:48:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:07.406 15:48:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:07.406 15:48:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:07.406 15:48:56 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:39:07.406 15:48:56 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:39:07.406 15:48:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:07.666 15:48:56 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:39:07.666 15:48:56 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:39:07.666 15:48:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:07.666 15:48:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:39:07.666 15:48:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:07.927 15:48:56 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:39:07.927 15:48:56 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:39:07.927 15:48:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:07.927 15:48:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:07.927 15:48:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:07.927 15:48:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:07.927 15:48:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:07.927 15:48:56 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:39:07.927 15:48:56 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:07.927 15:48:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:08.188 15:48:56 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:39:08.188 15:48:56 keyring_file -- keyring/file.sh@105 -- # jq length 00:39:08.188 15:48:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:08.449 15:48:57 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:39:08.449 15:48:57 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XrE5rLplrH 00:39:08.449 15:48:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XrE5rLplrH 00:39:08.449 15:48:57 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.uNvIYiVQoo 00:39:08.449 15:48:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.uNvIYiVQoo 00:39:08.709 15:48:57 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:08.709 15:48:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:08.971 nvme0n1 00:39:08.971 15:48:57 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:39:08.971 15:48:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:39:09.231 15:48:57 keyring_file -- keyring/file.sh@113 -- # config='{ 00:39:09.232 "subsystems": [ 00:39:09.232 { 00:39:09.232 "subsystem": "keyring", 00:39:09.232 "config": [ 00:39:09.232 { 00:39:09.232 "method": "keyring_file_add_key", 00:39:09.232 "params": { 00:39:09.232 "name": "key0", 00:39:09.232 "path": "/tmp/tmp.XrE5rLplrH" 00:39:09.232 } 00:39:09.232 }, 00:39:09.232 { 00:39:09.232 "method": "keyring_file_add_key", 00:39:09.232 "params": { 00:39:09.232 "name": "key1", 00:39:09.232 "path": "/tmp/tmp.uNvIYiVQoo" 00:39:09.232 } 00:39:09.232 } 00:39:09.232 ] 
00:39:09.232 }, 00:39:09.232 { 00:39:09.232 "subsystem": "iobuf", 00:39:09.232 "config": [ 00:39:09.232 { 00:39:09.232 "method": "iobuf_set_options", 00:39:09.232 "params": { 00:39:09.232 "small_pool_count": 8192, 00:39:09.232 "large_pool_count": 1024, 00:39:09.232 "small_bufsize": 8192, 00:39:09.232 "large_bufsize": 135168, 00:39:09.232 "enable_numa": false 00:39:09.232 } 00:39:09.232 } 00:39:09.232 ] 00:39:09.232 }, 00:39:09.232 { 00:39:09.232 "subsystem": "sock", 00:39:09.232 "config": [ 00:39:09.232 { 00:39:09.232 "method": "sock_set_default_impl", 00:39:09.232 "params": { 00:39:09.232 "impl_name": "posix" 00:39:09.232 } 00:39:09.232 }, 00:39:09.232 { 00:39:09.232 "method": "sock_impl_set_options", 00:39:09.232 "params": { 00:39:09.232 "impl_name": "ssl", 00:39:09.232 "recv_buf_size": 4096, 00:39:09.232 "send_buf_size": 4096, 00:39:09.232 "enable_recv_pipe": true, 00:39:09.232 "enable_quickack": false, 00:39:09.232 "enable_placement_id": 0, 00:39:09.232 "enable_zerocopy_send_server": true, 00:39:09.232 "enable_zerocopy_send_client": false, 00:39:09.232 "zerocopy_threshold": 0, 00:39:09.232 "tls_version": 0, 00:39:09.232 "enable_ktls": false 00:39:09.232 } 00:39:09.232 }, 00:39:09.232 { 00:39:09.232 "method": "sock_impl_set_options", 00:39:09.232 "params": { 00:39:09.232 "impl_name": "posix", 00:39:09.232 "recv_buf_size": 2097152, 00:39:09.232 "send_buf_size": 2097152, 00:39:09.232 "enable_recv_pipe": true, 00:39:09.232 "enable_quickack": false, 00:39:09.232 "enable_placement_id": 0, 00:39:09.232 "enable_zerocopy_send_server": true, 00:39:09.232 "enable_zerocopy_send_client": false, 00:39:09.232 "zerocopy_threshold": 0, 00:39:09.232 "tls_version": 0, 00:39:09.232 "enable_ktls": false 00:39:09.232 } 00:39:09.232 } 00:39:09.232 ] 00:39:09.232 }, 00:39:09.232 { 00:39:09.232 "subsystem": "vmd", 00:39:09.232 "config": [] 00:39:09.232 }, 00:39:09.232 { 00:39:09.232 "subsystem": "accel", 00:39:09.232 "config": [ 00:39:09.232 { 00:39:09.232 "method": "accel_set_options", 00:39:09.232 "params": { 00:39:09.232 "small_cache_size": 128, 00:39:09.232 "large_cache_size": 16, 00:39:09.232 "task_count": 2048, 00:39:09.232 "sequence_count": 2048, 00:39:09.232 "buf_count": 2048 00:39:09.232 } 00:39:09.232 } 00:39:09.232 ] 00:39:09.232 }, 00:39:09.232 { 00:39:09.232 "subsystem": "bdev", 00:39:09.232 "config": [ 00:39:09.232 { 00:39:09.232 "method": "bdev_set_options", 00:39:09.232 "params": { 00:39:09.232 "bdev_io_pool_size": 65535, 00:39:09.232 "bdev_io_cache_size": 256, 00:39:09.232 "bdev_auto_examine": true, 00:39:09.232 "iobuf_small_cache_size": 128, 00:39:09.232 "iobuf_large_cache_size": 16 00:39:09.232 } 00:39:09.232 }, 00:39:09.232 { 00:39:09.232 "method": "bdev_raid_set_options", 00:39:09.232 "params": { 00:39:09.232 "process_window_size_kb": 1024, 00:39:09.232 "process_max_bandwidth_mb_sec": 0 00:39:09.232 } 00:39:09.232 }, 00:39:09.232 { 00:39:09.232 "method": "bdev_iscsi_set_options", 00:39:09.232 "params": { 00:39:09.232 "timeout_sec": 30 00:39:09.232 } 00:39:09.232 }, 00:39:09.232 { 00:39:09.232 "method": "bdev_nvme_set_options", 00:39:09.232 "params": { 00:39:09.232 "action_on_timeout": "none", 00:39:09.232 "timeout_us": 0, 00:39:09.232 "timeout_admin_us": 0, 00:39:09.232 "keep_alive_timeout_ms": 10000, 00:39:09.232 "arbitration_burst": 0, 00:39:09.232 "low_priority_weight": 0, 00:39:09.232 "medium_priority_weight": 0, 00:39:09.232 "high_priority_weight": 0, 00:39:09.232 "nvme_adminq_poll_period_us": 10000, 00:39:09.232 "nvme_ioq_poll_period_us": 0, 00:39:09.232 "io_queue_requests": 512, 
00:39:09.232 "delay_cmd_submit": true, 00:39:09.232 "transport_retry_count": 4, 00:39:09.232 "bdev_retry_count": 3, 00:39:09.232 "transport_ack_timeout": 0, 00:39:09.232 "ctrlr_loss_timeout_sec": 0, 00:39:09.232 "reconnect_delay_sec": 0, 00:39:09.232 "fast_io_fail_timeout_sec": 0, 00:39:09.232 "disable_auto_failback": false, 00:39:09.232 "generate_uuids": false, 00:39:09.232 "transport_tos": 0, 00:39:09.232 "nvme_error_stat": false, 00:39:09.232 "rdma_srq_size": 0, 00:39:09.232 "io_path_stat": false, 00:39:09.232 "allow_accel_sequence": false, 00:39:09.232 "rdma_max_cq_size": 0, 00:39:09.232 "rdma_cm_event_timeout_ms": 0, 00:39:09.232 "dhchap_digests": [ 00:39:09.232 "sha256", 00:39:09.232 "sha384", 00:39:09.232 "sha512" 00:39:09.232 ], 00:39:09.232 "dhchap_dhgroups": [ 00:39:09.232 "null", 00:39:09.232 "ffdhe2048", 00:39:09.232 "ffdhe3072", 00:39:09.232 "ffdhe4096", 00:39:09.232 "ffdhe6144", 00:39:09.232 "ffdhe8192" 00:39:09.232 ] 00:39:09.232 } 00:39:09.232 }, 00:39:09.232 { 00:39:09.232 "method": "bdev_nvme_attach_controller", 00:39:09.232 "params": { 00:39:09.232 "name": "nvme0", 00:39:09.232 "trtype": "TCP", 00:39:09.232 "adrfam": "IPv4", 00:39:09.232 "traddr": "127.0.0.1", 00:39:09.232 "trsvcid": "4420", 00:39:09.232 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:09.232 "prchk_reftag": false, 00:39:09.232 "prchk_guard": false, 00:39:09.232 "ctrlr_loss_timeout_sec": 0, 00:39:09.232 "reconnect_delay_sec": 0, 00:39:09.232 "fast_io_fail_timeout_sec": 0, 00:39:09.232 "psk": "key0", 00:39:09.232 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:09.232 "hdgst": false, 00:39:09.232 "ddgst": false, 00:39:09.232 "multipath": "multipath" 00:39:09.232 } 00:39:09.232 }, 00:39:09.232 { 00:39:09.232 "method": "bdev_nvme_set_hotplug", 00:39:09.232 "params": { 00:39:09.232 "period_us": 100000, 00:39:09.232 "enable": false 00:39:09.232 } 00:39:09.232 }, 00:39:09.232 { 00:39:09.232 "method": "bdev_wait_for_examine" 00:39:09.232 } 00:39:09.232 ] 00:39:09.232 }, 00:39:09.232 { 00:39:09.232 "subsystem": "nbd", 00:39:09.232 "config": [] 00:39:09.232 } 00:39:09.232 ] 00:39:09.232 }' 00:39:09.232 15:48:57 keyring_file -- keyring/file.sh@115 -- # killprocess 936162 00:39:09.232 15:48:57 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 936162 ']' 00:39:09.232 15:48:57 keyring_file -- common/autotest_common.sh@958 -- # kill -0 936162 00:39:09.232 15:48:57 keyring_file -- common/autotest_common.sh@959 -- # uname 00:39:09.232 15:48:57 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:09.232 15:48:57 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 936162 00:39:09.232 15:48:58 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:09.232 15:48:58 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:09.232 15:48:58 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 936162' 00:39:09.232 killing process with pid 936162 00:39:09.232 15:48:58 keyring_file -- common/autotest_common.sh@973 -- # kill 936162 00:39:09.232 Received shutdown signal, test time was about 1.000000 seconds 00:39:09.232 00:39:09.232 Latency(us) 00:39:09.232 [2024-11-20T14:48:58.192Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:09.232 [2024-11-20T14:48:58.192Z] =================================================================================================================== 00:39:09.232 [2024-11-20T14:48:58.193Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:09.233 
15:48:58 keyring_file -- common/autotest_common.sh@978 -- # wait 936162 00:39:09.233 15:48:58 keyring_file -- keyring/file.sh@118 -- # bperfpid=937969 00:39:09.233 15:48:58 keyring_file -- keyring/file.sh@120 -- # waitforlisten 937969 /var/tmp/bperf.sock 00:39:09.233 15:48:58 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 937969 ']' 00:39:09.233 15:48:58 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:09.233 15:48:58 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:39:09.233 15:48:58 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:09.233 15:48:58 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:09.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:09.233 15:48:58 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:09.233 15:48:58 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:39:09.233 "subsystems": [ 00:39:09.233 { 00:39:09.233 "subsystem": "keyring", 00:39:09.233 "config": [ 00:39:09.233 { 00:39:09.233 "method": "keyring_file_add_key", 00:39:09.233 "params": { 00:39:09.233 "name": "key0", 00:39:09.233 "path": "/tmp/tmp.XrE5rLplrH" 00:39:09.233 } 00:39:09.233 }, 00:39:09.233 { 00:39:09.233 "method": "keyring_file_add_key", 00:39:09.233 "params": { 00:39:09.233 "name": "key1", 00:39:09.233 "path": "/tmp/tmp.uNvIYiVQoo" 00:39:09.233 } 00:39:09.233 } 00:39:09.233 ] 00:39:09.233 }, 00:39:09.233 { 00:39:09.233 "subsystem": "iobuf", 00:39:09.233 "config": [ 00:39:09.233 { 00:39:09.233 "method": "iobuf_set_options", 00:39:09.233 "params": { 00:39:09.233 "small_pool_count": 8192, 00:39:09.233 "large_pool_count": 1024, 00:39:09.233 "small_bufsize": 8192, 00:39:09.233 "large_bufsize": 135168, 00:39:09.233 "enable_numa": false 00:39:09.233 } 00:39:09.233 } 00:39:09.233 ] 00:39:09.233 }, 00:39:09.233 { 00:39:09.233 "subsystem": "sock", 00:39:09.233 "config": [ 00:39:09.233 { 00:39:09.233 "method": "sock_set_default_impl", 00:39:09.233 "params": { 00:39:09.233 "impl_name": "posix" 00:39:09.233 } 00:39:09.233 }, 00:39:09.233 { 00:39:09.233 "method": "sock_impl_set_options", 00:39:09.233 "params": { 00:39:09.233 "impl_name": "ssl", 00:39:09.233 "recv_buf_size": 4096, 00:39:09.233 "send_buf_size": 4096, 00:39:09.233 "enable_recv_pipe": true, 00:39:09.233 "enable_quickack": false, 00:39:09.233 "enable_placement_id": 0, 00:39:09.233 "enable_zerocopy_send_server": true, 00:39:09.233 "enable_zerocopy_send_client": false, 00:39:09.233 "zerocopy_threshold": 0, 00:39:09.233 "tls_version": 0, 00:39:09.233 "enable_ktls": false 00:39:09.233 } 00:39:09.233 }, 00:39:09.233 { 00:39:09.233 "method": "sock_impl_set_options", 00:39:09.233 "params": { 00:39:09.233 "impl_name": "posix", 00:39:09.233 "recv_buf_size": 2097152, 00:39:09.233 "send_buf_size": 2097152, 00:39:09.233 "enable_recv_pipe": true, 00:39:09.233 "enable_quickack": false, 00:39:09.233 "enable_placement_id": 0, 00:39:09.233 "enable_zerocopy_send_server": true, 00:39:09.233 "enable_zerocopy_send_client": false, 00:39:09.233 "zerocopy_threshold": 0, 00:39:09.233 "tls_version": 0, 00:39:09.233 "enable_ktls": false 00:39:09.233 } 00:39:09.233 } 00:39:09.233 ] 00:39:09.233 }, 00:39:09.233 { 00:39:09.233 "subsystem": "vmd", 00:39:09.233 "config": [] 
00:39:09.233 }, 00:39:09.233 { 00:39:09.233 "subsystem": "accel", 00:39:09.233 "config": [ 00:39:09.233 { 00:39:09.233 "method": "accel_set_options", 00:39:09.233 "params": { 00:39:09.233 "small_cache_size": 128, 00:39:09.233 "large_cache_size": 16, 00:39:09.233 "task_count": 2048, 00:39:09.233 "sequence_count": 2048, 00:39:09.233 "buf_count": 2048 00:39:09.233 } 00:39:09.233 } 00:39:09.233 ] 00:39:09.233 }, 00:39:09.233 { 00:39:09.233 "subsystem": "bdev", 00:39:09.233 "config": [ 00:39:09.233 { 00:39:09.233 "method": "bdev_set_options", 00:39:09.233 "params": { 00:39:09.233 "bdev_io_pool_size": 65535, 00:39:09.233 "bdev_io_cache_size": 256, 00:39:09.233 "bdev_auto_examine": true, 00:39:09.233 "iobuf_small_cache_size": 128, 00:39:09.233 "iobuf_large_cache_size": 16 00:39:09.233 } 00:39:09.233 }, 00:39:09.233 { 00:39:09.233 "method": "bdev_raid_set_options", 00:39:09.233 "params": { 00:39:09.233 "process_window_size_kb": 1024, 00:39:09.233 "process_max_bandwidth_mb_sec": 0 00:39:09.233 } 00:39:09.233 }, 00:39:09.233 { 00:39:09.233 "method": "bdev_iscsi_set_options", 00:39:09.233 "params": { 00:39:09.233 "timeout_sec": 30 00:39:09.233 } 00:39:09.233 }, 00:39:09.233 { 00:39:09.233 "method": "bdev_nvme_set_options", 00:39:09.233 "params": { 00:39:09.233 "action_on_timeout": "none", 00:39:09.233 "timeout_us": 0, 00:39:09.233 "timeout_admin_us": 0, 00:39:09.233 "keep_alive_timeout_ms": 10000, 00:39:09.233 "arbitration_burst": 0, 00:39:09.233 "low_priority_weight": 0, 00:39:09.233 "medium_priority_weight": 0, 00:39:09.233 "high_priority_weight": 0, 00:39:09.233 "nvme_adminq_poll_period_us": 10000, 00:39:09.233 "nvme_ioq_poll_period_us": 0, 00:39:09.233 "io_queue_requests": 512, 00:39:09.233 "delay_cmd_submit": true, 00:39:09.233 "transport_retry_count": 4, 00:39:09.233 "bdev_retry_count": 3, 00:39:09.233 "transport_ack_timeout": 0, 00:39:09.233 "ctrlr_loss_timeout_sec": 0, 00:39:09.233 "reconnect_delay_sec": 0, 00:39:09.233 "fast_io_fail_timeout_sec": 0, 00:39:09.233 "disable_auto_failback": false, 00:39:09.233 "generate_uuids": false, 00:39:09.233 "transport_tos": 0, 00:39:09.233 "nvme_error_stat": false, 00:39:09.233 "rdma_srq_size": 0, 00:39:09.233 "io_path_stat": false, 00:39:09.233 "allow_accel_sequence": false, 00:39:09.233 "rdma_max_cq_size": 0, 00:39:09.233 "rdma_cm_event_timeout_ms": 0, 00:39:09.233 "dhchap_digests": [ 00:39:09.233 "sha256", 00:39:09.233 "sha384", 00:39:09.233 "sha512" 00:39:09.233 ], 00:39:09.233 "dhchap_dhgroups": [ 00:39:09.233 "null", 00:39:09.233 "ffdhe2048", 00:39:09.233 "ffdhe3072", 00:39:09.233 "ffdhe4096", 00:39:09.233 "ffdhe6144", 00:39:09.233 "ffdhe8192" 00:39:09.233 ] 00:39:09.233 } 00:39:09.233 }, 00:39:09.233 { 00:39:09.233 "method": "bdev_nvme_attach_controller", 00:39:09.233 "params": { 00:39:09.233 "name": "nvme0", 00:39:09.233 "trtype": "TCP", 00:39:09.233 "adrfam": "IPv4", 00:39:09.233 "traddr": "127.0.0.1", 00:39:09.233 "trsvcid": "4420", 00:39:09.233 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:09.233 "prchk_reftag": false, 00:39:09.233 "prchk_guard": false, 00:39:09.233 "ctrlr_loss_timeout_sec": 0, 00:39:09.233 "reconnect_delay_sec": 0, 00:39:09.233 "fast_io_fail_timeout_sec": 0, 00:39:09.233 "psk": "key0", 00:39:09.233 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:09.233 "hdgst": false, 00:39:09.233 "ddgst": false, 00:39:09.233 "multipath": "multipath" 00:39:09.233 } 00:39:09.233 }, 00:39:09.233 { 00:39:09.233 "method": "bdev_nvme_set_hotplug", 00:39:09.233 "params": { 00:39:09.233 "period_us": 100000, 00:39:09.233 "enable": false 00:39:09.233 } 
00:39:09.233 }, 00:39:09.233 { 00:39:09.233 "method": "bdev_wait_for_examine" 00:39:09.233 } 00:39:09.233 ] 00:39:09.233 }, 00:39:09.233 { 00:39:09.233 "subsystem": "nbd", 00:39:09.233 "config": [] 00:39:09.233 } 00:39:09.233 ] 00:39:09.233 }' 00:39:09.233 15:48:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:09.494 [2024-11-20 15:48:58.209722] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 00:39:09.494 [2024-11-20 15:48:58.209777] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid937969 ] 00:39:09.494 [2024-11-20 15:48:58.293292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:09.494 [2024-11-20 15:48:58.322175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:09.754 [2024-11-20 15:48:58.465299] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:10.323 15:48:58 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:10.323 15:48:58 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:39:10.323 15:48:58 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:39:10.323 15:48:58 keyring_file -- keyring/file.sh@121 -- # jq length 00:39:10.323 15:48:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:10.323 15:48:59 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:39:10.323 15:48:59 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:39:10.323 15:48:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:10.323 15:48:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:10.324 15:48:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:10.324 15:48:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:10.324 15:48:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:10.584 15:48:59 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:39:10.584 15:48:59 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:39:10.584 15:48:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:10.584 15:48:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:10.584 15:48:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:10.584 15:48:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:10.584 15:48:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:10.844 15:48:59 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:39:10.844 15:48:59 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:39:10.844 15:48:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:39:10.844 15:48:59 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:39:10.844 15:48:59 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:39:10.844 15:48:59 keyring_file -- keyring/file.sh@1 -- # cleanup 00:39:10.844 15:48:59 keyring_file -- 
keyring/file.sh@19 -- # rm -f /tmp/tmp.XrE5rLplrH /tmp/tmp.uNvIYiVQoo 00:39:10.844 15:48:59 keyring_file -- keyring/file.sh@20 -- # killprocess 937969 00:39:10.844 15:48:59 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 937969 ']' 00:39:10.844 15:48:59 keyring_file -- common/autotest_common.sh@958 -- # kill -0 937969 00:39:10.844 15:48:59 keyring_file -- common/autotest_common.sh@959 -- # uname 00:39:10.845 15:48:59 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:10.845 15:48:59 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 937969 00:39:11.105 15:48:59 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:11.105 15:48:59 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:11.105 15:48:59 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 937969' 00:39:11.105 killing process with pid 937969 00:39:11.105 15:48:59 keyring_file -- common/autotest_common.sh@973 -- # kill 937969 00:39:11.105 Received shutdown signal, test time was about 1.000000 seconds 00:39:11.105 00:39:11.105 Latency(us) 00:39:11.105 [2024-11-20T14:49:00.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:11.105 [2024-11-20T14:49:00.065Z] =================================================================================================================== 00:39:11.105 [2024-11-20T14:49:00.065Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:39:11.105 15:48:59 keyring_file -- common/autotest_common.sh@978 -- # wait 937969 00:39:11.105 15:48:59 keyring_file -- keyring/file.sh@21 -- # killprocess 936122 00:39:11.105 15:48:59 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 936122 ']' 00:39:11.105 15:48:59 keyring_file -- common/autotest_common.sh@958 -- # kill -0 936122 00:39:11.105 15:48:59 keyring_file -- common/autotest_common.sh@959 -- # uname 00:39:11.105 15:48:59 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:11.105 15:48:59 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 936122 00:39:11.105 15:48:59 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:11.105 15:48:59 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:11.105 15:48:59 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 936122' 00:39:11.105 killing process with pid 936122 00:39:11.105 15:48:59 keyring_file -- common/autotest_common.sh@973 -- # kill 936122 00:39:11.105 15:48:59 keyring_file -- common/autotest_common.sh@978 -- # wait 936122 00:39:11.366 00:39:11.366 real 0m12.165s 00:39:11.366 user 0m29.190s 00:39:11.366 sys 0m2.887s 00:39:11.366 15:49:00 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:11.366 15:49:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:11.366 ************************************ 00:39:11.366 END TEST keyring_file 00:39:11.366 ************************************ 00:39:11.366 15:49:00 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:39:11.366 15:49:00 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:11.366 15:49:00 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:11.366 15:49:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:11.366 15:49:00 -- 
common/autotest_common.sh@10 -- # set +x 00:39:11.366 ************************************ 00:39:11.366 START TEST keyring_linux 00:39:11.366 ************************************ 00:39:11.366 15:49:00 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:11.366 Joined session keyring: 591493737 00:39:11.628 * Looking for test storage... 00:39:11.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:11.628 15:49:00 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:11.628 15:49:00 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:39:11.628 15:49:00 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:11.628 15:49:00 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:11.628 15:49:00 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:11.628 15:49:00 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:11.628 15:49:00 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:11.628 15:49:00 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:39:11.628 15:49:00 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:39:11.628 15:49:00 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:39:11.628 15:49:00 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:39:11.628 15:49:00 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:39:11.628 15:49:00 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:39:11.628 15:49:00 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:39:11.628 15:49:00 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:11.628 15:49:00 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:39:11.628 15:49:00 keyring_linux -- scripts/common.sh@345 -- # : 1 00:39:11.628 15:49:00 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:11.628 15:49:00 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:11.628 15:49:00 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:39:11.628 15:49:00 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:39:11.628 15:49:00 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:11.628 15:49:00 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:39:11.628 15:49:00 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:39:11.628 15:49:00 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:39:11.628 15:49:00 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:39:11.628 15:49:00 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:11.628 15:49:00 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:39:11.628 15:49:00 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:39:11.628 15:49:00 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:11.628 15:49:00 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:11.628 15:49:00 keyring_linux -- scripts/common.sh@368 -- # return 0 00:39:11.628 15:49:00 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:11.628 15:49:00 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:11.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:11.628 --rc genhtml_branch_coverage=1 00:39:11.628 --rc genhtml_function_coverage=1 00:39:11.628 --rc genhtml_legend=1 00:39:11.628 --rc geninfo_all_blocks=1 00:39:11.628 --rc geninfo_unexecuted_blocks=1 00:39:11.628 00:39:11.628 ' 00:39:11.628 15:49:00 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:11.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:11.628 --rc genhtml_branch_coverage=1 00:39:11.628 --rc genhtml_function_coverage=1 00:39:11.628 --rc genhtml_legend=1 00:39:11.628 --rc geninfo_all_blocks=1 00:39:11.628 --rc geninfo_unexecuted_blocks=1 00:39:11.628 00:39:11.628 ' 00:39:11.628 15:49:00 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:11.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:11.628 --rc genhtml_branch_coverage=1 00:39:11.628 --rc genhtml_function_coverage=1 00:39:11.628 --rc genhtml_legend=1 00:39:11.628 --rc geninfo_all_blocks=1 00:39:11.628 --rc geninfo_unexecuted_blocks=1 00:39:11.628 00:39:11.628 ' 00:39:11.628 15:49:00 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:11.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:11.628 --rc genhtml_branch_coverage=1 00:39:11.628 --rc genhtml_function_coverage=1 00:39:11.628 --rc genhtml_legend=1 00:39:11.628 --rc geninfo_all_blocks=1 00:39:11.628 --rc geninfo_unexecuted_blocks=1 00:39:11.628 00:39:11.628 ' 00:39:11.628 15:49:00 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:11.628 15:49:00 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:11.628 15:49:00 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:39:11.628 15:49:00 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:11.628 15:49:00 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:11.629 15:49:00 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:11.629 15:49:00 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:11.629 15:49:00 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:39:11.629 15:49:00 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:11.629 15:49:00 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:11.629 15:49:00 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:11.629 15:49:00 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:11.629 15:49:00 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:11.629 15:49:00 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:11.629 15:49:00 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:11.629 15:49:00 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:11.629 15:49:00 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:11.629 15:49:00 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:11.629 15:49:00 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:11.629 15:49:00 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:11.629 15:49:00 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:39:11.629 15:49:00 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:11.629 15:49:00 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:11.629 15:49:00 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:11.629 15:49:00 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:11.629 15:49:00 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:11.629 15:49:00 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:11.629 15:49:00 keyring_linux -- paths/export.sh@5 -- # export PATH 00:39:11.629 15:49:00 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
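
The NVME_HOSTNQN value sourced above comes from 'nvme gen-hostnqn', which is just the fixed NVMe host-NQN prefix plus a UUID. An equivalent sketch follows; note that nvme-cli prefers a stable host-derived UUID where one is available, with a random UUID as the fallback used here:

import uuid

# Equivalent of `nvme gen-hostnqn`: the fixed 2014-08 prefix plus a UUID.
print(f"nqn.2014-08.org.nvmexpress:uuid:{uuid.uuid4()}")
# e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
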
00:39:11.629 15:49:00 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:39:11.629 15:49:00 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:11.629 15:49:00 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:11.629 15:49:00 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:11.629 15:49:00 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:11.629 15:49:00 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:11.629 15:49:00 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:11.629 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:11.629 15:49:00 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:11.629 15:49:00 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:11.629 15:49:00 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:11.629 15:49:00 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:11.629 15:49:00 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:11.629 15:49:00 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:11.629 15:49:00 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:39:11.629 15:49:00 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:39:11.629 15:49:00 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:39:11.629 15:49:00 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:39:11.629 15:49:00 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:11.629 15:49:00 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:39:11.629 15:49:00 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:11.629 15:49:00 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:11.629 15:49:00 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:39:11.629 15:49:00 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:11.629 15:49:00 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:11.629 15:49:00 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:39:11.629 15:49:00 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:11.629 15:49:00 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:39:11.629 15:49:00 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:39:11.629 15:49:00 keyring_linux -- nvmf/common.sh@733 -- # python - 00:39:11.629 15:49:00 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:39:11.629 15:49:00 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:39:11.629 /tmp/:spdk-test:key0 00:39:11.629 15:49:00 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:39:11.629 15:49:00 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:11.629 15:49:00 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:39:11.629 15:49:00 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:11.629 15:49:00 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:11.629 15:49:00 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:39:11.629 
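
The format_interchange_psk step traced above (the inline 'python -' heredoc) wraps the raw configured key in the NVMe/TCP PSK interchange format, NVMeTLSkey-1:<hmac>:<base64 blob>:, where the blob is the key bytes with a CRC32 appended. A minimal sketch, assuming the CRC32 is appended little-endian, which is consistent with the payloads printed in this log:

import base64
import zlib

def format_interchange_psk(key: str, hmac_id: int = 0) -> str:
    # NVMeTLSkey-1:<hh>:<base64(configured PSK || CRC32)>:
    # hmac_id 0 means "no hash"; 1/2 would select HMAC-SHA-256/384.
    raw = key.encode("ascii")
    raw += zlib.crc32(raw).to_bytes(4, "little")  # CRC32, little-endian (assumed)
    return f"NVMeTLSkey-1:{hmac_id:02x}:{base64.b64encode(raw).decode()}:"

print(format_interchange_psk("00112233445566778899aabbccddeeff"))
# should reproduce the key0 payload seen above:
# NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
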
15:49:00 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:11.629 15:49:00 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:11.629 15:49:00 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:39:11.629 15:49:00 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:11.629 15:49:00 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:39:11.629 15:49:00 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:39:11.629 15:49:00 keyring_linux -- nvmf/common.sh@733 -- # python - 00:39:11.890 15:49:00 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:39:11.890 15:49:00 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:39:11.890 /tmp/:spdk-test:key1 00:39:11.890 15:49:00 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:11.890 15:49:00 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=938429 00:39:11.890 15:49:00 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 938429 00:39:11.890 15:49:00 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 938429 ']' 00:39:11.890 15:49:00 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:11.890 15:49:00 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:11.890 15:49:00 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:11.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:11.890 15:49:00 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:11.890 15:49:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:11.890 [2024-11-20 15:49:00.637042] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
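
waitforlisten 938429 above blocks until the freshly started spdk_tgt answers on its UNIX-domain RPC socket. A simplified sketch of the idea; the real helper in autotest_common.sh does more, for instance it also verifies the pid is still alive while waiting:

import socket
import time

def waitforlisten(rpc_sock: str = "/var/tmp/spdk.sock", timeout: float = 30.0) -> None:
    # Poll until something accepts connections on the target's RPC socket.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(rpc_sock)
            return
        except OSError:
            time.sleep(0.2)
        finally:
            s.close()
    raise TimeoutError(f"no RPC listener on {rpc_sock} after {timeout}s")
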
00:39:11.890 [2024-11-20 15:49:00.637120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid938429 ] 00:39:11.890 [2024-11-20 15:49:00.723281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:11.890 [2024-11-20 15:49:00.758876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:12.830 15:49:01 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:12.830 15:49:01 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:39:12.830 15:49:01 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:39:12.830 15:49:01 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.830 15:49:01 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:12.830 [2024-11-20 15:49:01.439485] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:12.830 null0 00:39:12.830 [2024-11-20 15:49:01.471545] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:12.830 [2024-11-20 15:49:01.471892] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:12.830 15:49:01 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.830 15:49:01 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:39:12.830 951352024 00:39:12.830 15:49:01 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:39:12.830 681121561 00:39:12.830 15:49:01 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=938738 00:39:12.830 15:49:01 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 938738 /var/tmp/bperf.sock 00:39:12.830 15:49:01 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:39:12.830 15:49:01 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 938738 ']' 00:39:12.830 15:49:01 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:12.830 15:49:01 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:12.830 15:49:01 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:12.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:12.830 15:49:01 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:12.830 15:49:01 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:12.830 [2024-11-20 15:49:01.549392] Starting SPDK v25.01-pre git sha1 32c3f377c / DPDK 24.03.0 initialization... 
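
The keyctl calls above stage both interchange-format PSKs in the Linux session keyring (@s); the later --psk :spdk-test:key0 arguments then resolve against the kernel keyring instead of a key file. A sketch of the same add/search/print/unlink sequence driven through the keyctl CLI; the serial number (951352024 in this run) differs per session:

import subprocess

def keyctl(*args: str) -> str:
    return subprocess.check_output(("keyctl", *args), text=True).strip()

psk = "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:"
sn = keyctl("add", "user", ":spdk-test:key0", psk, "@s")   # e.g. 951352024
assert keyctl("search", "@s", "user", ":spdk-test:key0") == sn
assert keyctl("print", sn) == psk   # payload round-trips intact
keyctl("unlink", sn)                # cleanup; keyctl reports "1 links removed"
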
00:39:12.830 [2024-11-20 15:49:01.549447] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid938738 ] 00:39:12.830 [2024-11-20 15:49:01.631387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:12.830 [2024-11-20 15:49:01.661273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:13.401 15:49:02 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:13.401 15:49:02 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:39:13.401 15:49:02 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:39:13.401 15:49:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:39:13.662 15:49:02 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:39:13.662 15:49:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:39:13.923 15:49:02 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:13.923 15:49:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:13.923 [2024-11-20 15:49:02.869500] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:14.184 nvme0n1 00:39:14.184 15:49:02 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:39:14.184 15:49:02 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:39:14.184 15:49:02 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:14.184 15:49:02 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:14.184 15:49:02 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:14.184 15:49:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:14.184 15:49:03 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:39:14.184 15:49:03 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:14.184 15:49:03 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:39:14.184 15:49:03 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:39:14.184 15:49:03 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:14.184 15:49:03 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:39:14.184 15:49:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:14.445 15:49:03 keyring_linux -- keyring/linux.sh@25 -- # sn=951352024 00:39:14.445 15:49:03 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:39:14.445 15:49:03 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:39:14.445 15:49:03 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 951352024 == \9\5\1\3\5\2\0\2\4 ]] 00:39:14.445 15:49:03 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 951352024 00:39:14.445 15:49:03 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:39:14.445 15:49:03 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:14.705 Running I/O for 1 seconds... 00:39:15.771 24327.00 IOPS, 95.03 MiB/s 00:39:15.771 Latency(us) 00:39:15.771 [2024-11-20T14:49:04.731Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:15.771 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:39:15.771 nvme0n1 : 1.01 24327.98 95.03 0.00 0.00 5246.00 4314.45 10048.85 00:39:15.771 [2024-11-20T14:49:04.731Z] =================================================================================================================== 00:39:15.771 [2024-11-20T14:49:04.731Z] Total : 24327.98 95.03 0.00 0.00 5246.00 4314.45 10048.85 00:39:15.771 { 00:39:15.771 "results": [ 00:39:15.771 { 00:39:15.771 "job": "nvme0n1", 00:39:15.771 "core_mask": "0x2", 00:39:15.771 "workload": "randread", 00:39:15.771 "status": "finished", 00:39:15.771 "queue_depth": 128, 00:39:15.771 "io_size": 4096, 00:39:15.771 "runtime": 1.005221, 00:39:15.771 "iops": 24327.98359763674, 00:39:15.771 "mibps": 95.03118592826851, 00:39:15.771 "io_failed": 0, 00:39:15.771 "io_timeout": 0, 00:39:15.772 "avg_latency_us": 5246.003712942139, 00:39:15.772 "min_latency_us": 4314.453333333333, 00:39:15.772 "max_latency_us": 10048.853333333333 00:39:15.772 } 00:39:15.772 ], 00:39:15.772 "core_count": 1 00:39:15.772 } 00:39:15.772 15:49:04 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:15.772 15:49:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:15.772 15:49:04 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:39:15.772 15:49:04 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:39:15.772 15:49:04 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:15.772 15:49:04 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:15.772 15:49:04 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:15.772 15:49:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:16.049 15:49:04 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:39:16.049 15:49:04 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:16.049 15:49:04 keyring_linux -- keyring/linux.sh@23 -- # return 00:39:16.049 15:49:04 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:16.049 15:49:04 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:39:16.049 15:49:04 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:39:16.049 15:49:04 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:16.049 15:49:04 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:16.049 15:49:04 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:16.049 15:49:04 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:16.049 15:49:04 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:16.050 15:49:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:16.050 [2024-11-20 15:49:04.970642] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:16.050 [2024-11-20 15:49:04.971252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca9ba0 (107): Transport endpoint is not connected 00:39:16.050 [2024-11-20 15:49:04.972248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca9ba0 (9): Bad file descriptor 00:39:16.050 [2024-11-20 15:49:04.973250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:39:16.050 [2024-11-20 15:49:04.973263] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:16.050 [2024-11-20 15:49:04.973269] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:39:16.050 [2024-11-20 15:49:04.973275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:39:16.050 request: 00:39:16.050 { 00:39:16.050 "name": "nvme0", 00:39:16.050 "trtype": "tcp", 00:39:16.050 "traddr": "127.0.0.1", 00:39:16.050 "adrfam": "ipv4", 00:39:16.050 "trsvcid": "4420", 00:39:16.050 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:16.050 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:16.050 "prchk_reftag": false, 00:39:16.050 "prchk_guard": false, 00:39:16.050 "hdgst": false, 00:39:16.050 "ddgst": false, 00:39:16.050 "psk": ":spdk-test:key1", 00:39:16.050 "allow_unrecognized_csi": false, 00:39:16.050 "method": "bdev_nvme_attach_controller", 00:39:16.050 "req_id": 1 00:39:16.050 } 00:39:16.050 Got JSON-RPC error response 00:39:16.050 response: 00:39:16.050 { 00:39:16.050 "code": -5, 00:39:16.050 "message": "Input/output error" 00:39:16.050 } 00:39:16.050 15:49:04 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:39:16.050 15:49:04 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:16.050 15:49:04 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:16.050 15:49:04 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:16.050 15:49:04 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:39:16.050 15:49:04 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:16.050 15:49:04 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:39:16.050 15:49:04 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:39:16.050 15:49:04 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:39:16.050 15:49:04 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:39:16.050 15:49:04 keyring_linux -- keyring/linux.sh@33 -- # sn=951352024 00:39:16.050 15:49:04 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 951352024 00:39:16.050 1 links removed 00:39:16.050 15:49:04 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:16.050 15:49:05 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:39:16.050 15:49:05 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:39:16.050 15:49:05 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:39:16.050 15:49:05 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:39:16.310 15:49:05 keyring_linux -- keyring/linux.sh@33 -- # sn=681121561 00:39:16.310 15:49:05 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 681121561 00:39:16.310 1 links removed 00:39:16.310 15:49:05 keyring_linux -- keyring/linux.sh@41 -- # killprocess 938738 00:39:16.310 15:49:05 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 938738 ']' 00:39:16.310 15:49:05 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 938738 00:39:16.310 15:49:05 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:39:16.310 15:49:05 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:16.310 15:49:05 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 938738 00:39:16.310 15:49:05 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:16.310 15:49:05 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:16.310 15:49:05 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 938738' 00:39:16.310 killing process with pid 938738 00:39:16.310 15:49:05 keyring_linux -- common/autotest_common.sh@973 -- # kill 938738 00:39:16.310 Received shutdown signal, test time was about 1.000000 seconds 00:39:16.310 00:39:16.310 
00:39:16.310 15:49:05 keyring_linux -- keyring/linux.sh@41 -- # killprocess 938738
00:39:16.310 15:49:05 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 938738 ']'
00:39:16.310 15:49:05 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 938738
00:39:16.310 15:49:05 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:39:16.310 15:49:05 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:39:16.310 15:49:05 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 938738
00:39:16.310 15:49:05 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:39:16.310 15:49:05 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:39:16.310 15:49:05 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 938738'
00:39:16.310 killing process with pid 938738
00:39:16.310 15:49:05 keyring_linux -- common/autotest_common.sh@973 -- # kill 938738
00:39:16.310 Received shutdown signal, test time was about 1.000000 seconds
00:39:16.310
00:39:16.310 Latency(us)
00:39:16.310 [2024-11-20T14:49:05.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:16.310 [2024-11-20T14:49:05.270Z] ===================================================================================================================
00:39:16.310 [2024-11-20T14:49:05.270Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:39:16.310 15:49:05 keyring_linux -- common/autotest_common.sh@978 -- # wait 938738
00:39:16.310 15:49:05 keyring_linux -- keyring/linux.sh@42 -- # killprocess 938429
00:39:16.310 15:49:05 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 938429 ']'
00:39:16.310 15:49:05 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 938429
00:39:16.310 15:49:05 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:39:16.310 15:49:05 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:39:16.310 15:49:05 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 938429
00:39:16.310 15:49:05 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:39:16.310 15:49:05 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:39:16.310 15:49:05 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 938429'
00:39:16.310 killing process with pid 938429
00:39:16.310 15:49:05 keyring_linux -- common/autotest_common.sh@973 -- # kill 938429
00:39:16.310 15:49:05 keyring_linux -- common/autotest_common.sh@978 -- # wait 938429
00:39:16.571
00:39:16.571 real 0m5.184s
00:39:16.571 user 0m9.616s
00:39:16.571 sys 0m1.420s
00:39:16.571 15:49:05 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable
00:39:16.571 15:49:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:39:16.571 ************************************
00:39:16.571 END TEST keyring_linux
00:39:16.571 ************************************
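Both killprocess calls traced above walk the same defensive sequence before terminating anything: insist on a pid argument, probe the process with kill -0, resolve its command name, then kill and reap it. A paraphrased sketch of that logic, reconstructed from the xtrace rather than copied from common/autotest_common.sh:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                            # @954: a pid is required
        kill -0 "$pid" || return 1                           # @958: signal 0 only tests existence
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")  # @960: command name only
            [ "$process_name" = sudo ] && return 1           # @964: the real helper special-cases
        fi                                                   #       sudo; treated here as a bail-out
        echo "killing process with pid $pid"                 # @972
        kill "$pid"                                          # @973: default SIGTERM
        wait "$pid"                                          # @978: reap and surface exit status
    }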
00:39:16.571 15:49:05 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:39:16.571 15:49:05 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:39:16.571 15:49:05 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:39:16.571 15:49:05 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:39:16.571 15:49:05 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:39:16.571 15:49:05 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:39:16.571 15:49:05 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:39:16.571 15:49:05 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:39:16.571 15:49:05 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:39:16.571 15:49:05 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:39:16.571 15:49:05 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:39:16.571 15:49:05 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:39:16.571 15:49:05 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:39:16.571 15:49:05 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:39:16.571 15:49:05 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:39:16.571 15:49:05 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:39:16.571 15:49:05 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:39:16.571 15:49:05 -- common/autotest_common.sh@726 -- # xtrace_disable
00:39:16.571 15:49:05 -- common/autotest_common.sh@10 -- # set +x
00:39:16.571 15:49:05 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:39:16.571 15:49:05 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:39:16.571 15:49:05 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:39:16.571 15:49:05 -- common/autotest_common.sh@10 -- # set +x
00:39:24.728 INFO: APP EXITING
00:39:24.728 INFO: killing all VMs
00:39:24.728 INFO: killing vhost app
00:39:24.728 WARN: no vhost pid file found
00:39:24.728 INFO: EXIT DONE
00:39:28.033 0000:80:01.6 (8086 0b00): Already using the ioatdma driver
00:39:28.033 0000:80:01.7 (8086 0b00): Already using the ioatdma driver
00:39:28.033 0000:80:01.4 (8086 0b00): Already using the ioatdma driver
00:39:28.033 0000:80:01.5 (8086 0b00): Already using the ioatdma driver
00:39:28.033 0000:80:01.2 (8086 0b00): Already using the ioatdma driver
00:39:28.033 0000:80:01.3 (8086 0b00): Already using the ioatdma driver
00:39:28.033 0000:80:01.0 (8086 0b00): Already using the ioatdma driver
00:39:28.033 0000:80:01.1 (8086 0b00): Already using the ioatdma driver
00:39:28.033 0000:65:00.0 (144d a80a): Already using the nvme driver
00:39:28.033 0000:00:01.6 (8086 0b00): Already using the ioatdma driver
00:39:28.033 0000:00:01.7 (8086 0b00): Already using the ioatdma driver
00:39:28.033 0000:00:01.4 (8086 0b00): Already using the ioatdma driver
00:39:28.033 0000:00:01.5 (8086 0b00): Already using the ioatdma driver
00:39:28.033 0000:00:01.2 (8086 0b00): Already using the ioatdma driver
00:39:28.033 0000:00:01.3 (8086 0b00): Already using the ioatdma driver
00:39:28.033 0000:00:01.0 (8086 0b00): Already using the ioatdma driver
00:39:28.033 0000:00:01.1 (8086 0b00): Already using the ioatdma driver
00:39:32.243 Cleaning
00:39:32.243 Removing: /var/run/dpdk/spdk0/config
00:39:32.243 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:39:32.243 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:39:32.243 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:39:32.243 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:39:32.243 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:39:32.243 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:39:32.243 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:39:32.243 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:39:32.243 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:39:32.243 Removing: /var/run/dpdk/spdk0/hugepage_info
00:39:32.243 Removing: /var/run/dpdk/spdk1/config
00:39:32.243 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:39:32.243 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:39:32.243 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:39:32.243 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:39:32.243 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:39:32.243 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:39:32.243 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:39:32.243 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:39:32.243 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:39:32.243 Removing: /var/run/dpdk/spdk1/hugepage_info
00:39:32.243 Removing: /var/run/dpdk/spdk2/config
00:39:32.243 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:39:32.243 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:39:32.243 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:39:32.243 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:39:32.243 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:39:32.243 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:39:32.243 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:39:32.243 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:39:32.243 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:39:32.243 Removing: /var/run/dpdk/spdk2/hugepage_info
00:39:32.243 Removing: /var/run/dpdk/spdk3/config
00:39:32.243 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:39:32.243 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:39:32.243 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:39:32.243 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:39:32.243 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:39:32.243 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:39:32.243 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:39:32.243 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:39:32.243 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:39:32.243 Removing: /var/run/dpdk/spdk3/hugepage_info
00:39:32.243 Removing: /var/run/dpdk/spdk4/config
00:39:32.243 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:39:32.243 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:39:32.243 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:39:32.243 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:39:32.243 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:39:32.243 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:39:32.243 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:39:32.243 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:39:32.243 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:39:32.243 Removing: /var/run/dpdk/spdk4/hugepage_info
00:39:32.243 Removing: /dev/shm/bdev_svc_trace.1
00:39:32.243 Removing: /dev/shm/nvmf_trace.0
00:39:32.243 Removing: /dev/shm/spdk_tgt_trace.pid360642
00:39:32.243 Removing: /var/run/dpdk/spdk0
00:39:32.243 Removing: /var/run/dpdk/spdk1
00:39:32.243 Removing: /var/run/dpdk/spdk2
00:39:32.243 Removing: /var/run/dpdk/spdk3
00:39:32.243 Removing: /var/run/dpdk/spdk4
00:39:32.243 Removing: /var/run/dpdk/spdk_pid359155
00:39:32.244 Removing: /var/run/dpdk/spdk_pid360642
00:39:32.244 Removing: /var/run/dpdk/spdk_pid361492
00:39:32.244 Removing: /var/run/dpdk/spdk_pid362531
00:39:32.244 Removing: /var/run/dpdk/spdk_pid362871
00:39:32.244 Removing: /var/run/dpdk/spdk_pid363942
00:39:32.244 Removing: /var/run/dpdk/spdk_pid364114
00:39:32.244 Removing: /var/run/dpdk/spdk_pid364413
00:39:32.244 Removing: /var/run/dpdk/spdk_pid365561
00:39:32.244 Removing: /var/run/dpdk/spdk_pid366271
00:39:32.244 Removing: /var/run/dpdk/spdk_pid366612
00:39:32.244 Removing: /var/run/dpdk/spdk_pid366952
00:39:32.244 Removing: /var/run/dpdk/spdk_pid367326
00:39:32.244 Removing: /var/run/dpdk/spdk_pid367963
00:39:32.244 Removing: /var/run/dpdk/spdk_pid368509
00:39:32.244 Removing: /var/run/dpdk/spdk_pid368894
00:39:32.244 Removing: /var/run/dpdk/spdk_pid369157
00:39:32.244 Removing: /var/run/dpdk/spdk_pid370358
00:39:32.244 Removing: /var/run/dpdk/spdk_pid373773
00:39:32.244 Removing: /var/run/dpdk/spdk_pid374122
00:39:32.244 Removing: /var/run/dpdk/spdk_pid374361
00:39:32.244 Removing: /var/run/dpdk/spdk_pid374495
00:39:32.244 Removing: /var/run/dpdk/spdk_pid375023
00:39:32.244 Removing: /var/run/dpdk/spdk_pid375075
00:39:32.244 Removing: /var/run/dpdk/spdk_pid375466
00:39:32.244 Removing: /var/run/dpdk/spdk_pid375779
00:39:32.244 Removing: /var/run/dpdk/spdk_pid376133
00:39:32.244 Removing: /var/run/dpdk/spdk_pid376161
00:39:32.244 Removing: /var/run/dpdk/spdk_pid376521
00:39:32.244 Removing: /var/run/dpdk/spdk_pid376538
00:39:32.244 Removing: /var/run/dpdk/spdk_pid377174
00:39:32.244 Removing: /var/run/dpdk/spdk_pid377344
00:39:32.244 Removing: /var/run/dpdk/spdk_pid377741
00:39:32.244 Removing: /var/run/dpdk/spdk_pid382523
00:39:32.244 Removing: /var/run/dpdk/spdk_pid387699
00:39:32.244 Removing: /var/run/dpdk/spdk_pid400020
00:39:32.244 Removing: /var/run/dpdk/spdk_pid400699
00:39:32.244 Removing: /var/run/dpdk/spdk_pid406065
00:39:32.244 Removing: /var/run/dpdk/spdk_pid406447
00:39:32.244 Removing: /var/run/dpdk/spdk_pid411540
00:39:32.244 Removing: /var/run/dpdk/spdk_pid418727
00:39:32.244 Removing: /var/run/dpdk/spdk_pid422530
00:39:32.244 Removing: /var/run/dpdk/spdk_pid434987
00:39:32.244 Removing: /var/run/dpdk/spdk_pid445879
00:39:32.244 Removing: /var/run/dpdk/spdk_pid447912
00:39:32.244 Removing: /var/run/dpdk/spdk_pid449180
00:39:32.244 Removing: /var/run/dpdk/spdk_pid470018
00:39:32.244 Removing: /var/run/dpdk/spdk_pid475440
00:39:32.244 Removing: /var/run/dpdk/spdk_pid532062
00:39:32.244 Removing: /var/run/dpdk/spdk_pid538567
00:39:32.244 Removing: /var/run/dpdk/spdk_pid545619
00:39:32.244 Removing: /var/run/dpdk/spdk_pid553516
00:39:32.244 Removing: /var/run/dpdk/spdk_pid553518
00:39:32.244 Removing: /var/run/dpdk/spdk_pid554520
00:39:32.244 Removing: /var/run/dpdk/spdk_pid555521
00:39:32.244 Removing: /var/run/dpdk/spdk_pid556537
00:39:32.244 Removing: /var/run/dpdk/spdk_pid557210
00:39:32.244 Removing: /var/run/dpdk/spdk_pid557212
00:39:32.244 Removing: /var/run/dpdk/spdk_pid557544
00:39:32.244 Removing: /var/run/dpdk/spdk_pid557594
00:39:32.244 Removing: /var/run/dpdk/spdk_pid557724
00:39:32.244 Removing: /var/run/dpdk/spdk_pid558803
00:39:32.244 Removing: /var/run/dpdk/spdk_pid559826
00:39:32.244 Removing: /var/run/dpdk/spdk_pid560895
00:39:32.244 Removing: /var/run/dpdk/spdk_pid561552
00:39:32.244 Removing: /var/run/dpdk/spdk_pid561569
00:39:32.244 Removing: /var/run/dpdk/spdk_pid561907
00:39:32.244 Removing: /var/run/dpdk/spdk_pid563253
00:39:32.244 Removing: /var/run/dpdk/spdk_pid564426
00:39:32.244 Removing: /var/run/dpdk/spdk_pid574701
00:39:32.244 Removing: /var/run/dpdk/spdk_pid608888
00:39:32.244 Removing: /var/run/dpdk/spdk_pid614396
00:39:32.244 Removing: /var/run/dpdk/spdk_pid616866
00:39:32.244 Removing: /var/run/dpdk/spdk_pid618967
00:39:32.244 Removing: /var/run/dpdk/spdk_pid619225
00:39:32.244 Removing: /var/run/dpdk/spdk_pid619472
00:39:32.244 Removing: /var/run/dpdk/spdk_pid619644
00:39:32.244 Removing: /var/run/dpdk/spdk_pid620413
00:39:32.244 Removing: /var/run/dpdk/spdk_pid622645
00:39:32.244 Removing: /var/run/dpdk/spdk_pid623709
00:39:32.244 Removing: /var/run/dpdk/spdk_pid624198
00:39:32.244 Removing: /var/run/dpdk/spdk_pid626824
00:39:32.506 Removing: /var/run/dpdk/spdk_pid627525
00:39:32.506 Removing: /var/run/dpdk/spdk_pid628259
00:39:32.506 Removing: /var/run/dpdk/spdk_pid633297
00:39:32.506 Removing: /var/run/dpdk/spdk_pid640006
00:39:32.506 Removing: /var/run/dpdk/spdk_pid640007
00:39:32.506 Removing: /var/run/dpdk/spdk_pid640008
00:39:32.506 Removing: /var/run/dpdk/spdk_pid644698
00:39:32.506 Removing: /var/run/dpdk/spdk_pid654938
00:39:32.506 Removing: /var/run/dpdk/spdk_pid659771
00:39:32.506 Removing: /var/run/dpdk/spdk_pid667665
00:39:32.506 Removing: /var/run/dpdk/spdk_pid669125
00:39:32.506 Removing: /var/run/dpdk/spdk_pid670910
00:39:32.506 Removing: /var/run/dpdk/spdk_pid672434
00:39:32.506 Removing: /var/run/dpdk/spdk_pid678125
00:39:32.506 Removing: /var/run/dpdk/spdk_pid683432
00:39:32.506 Removing: /var/run/dpdk/spdk_pid688325
00:39:32.506 Removing: /var/run/dpdk/spdk_pid697561
00:39:32.506 Removing: /var/run/dpdk/spdk_pid697682
00:39:32.506 Removing: /var/run/dpdk/spdk_pid702771
00:39:32.506 Removing: /var/run/dpdk/spdk_pid703103
00:39:32.506 Removing: /var/run/dpdk/spdk_pid703278
00:39:32.506 Removing: /var/run/dpdk/spdk_pid703776
00:39:32.506 Removing: /var/run/dpdk/spdk_pid703781
00:39:32.506 Removing: /var/run/dpdk/spdk_pid709313
00:39:32.506 Removing: /var/run/dpdk/spdk_pid709986
00:39:32.506 Removing: /var/run/dpdk/spdk_pid715354
00:39:32.506 Removing: /var/run/dpdk/spdk_pid718627
00:39:32.506 Removing: /var/run/dpdk/spdk_pid725790
00:39:32.506 Removing: /var/run/dpdk/spdk_pid732518
00:39:32.506 Removing: /var/run/dpdk/spdk_pid742616
00:39:32.506 Removing: /var/run/dpdk/spdk_pid751262
00:39:32.506 Removing: /var/run/dpdk/spdk_pid751265
00:39:32.506 Removing: /var/run/dpdk/spdk_pid774697
00:39:32.768 Removing: /var/run/dpdk/spdk_pid775475
00:39:32.768 Removing: /var/run/dpdk/spdk_pid776345
00:39:32.768 Removing: /var/run/dpdk/spdk_pid776920
00:39:32.768 Removing: /var/run/dpdk/spdk_pid777881
00:39:32.768 Removing: /var/run/dpdk/spdk_pid778711
00:39:32.768 Removing: /var/run/dpdk/spdk_pid779462
00:39:32.768 Removing: /var/run/dpdk/spdk_pid780181
00:39:32.768 Removing: /var/run/dpdk/spdk_pid785231
00:39:32.768 Removing: /var/run/dpdk/spdk_pid785572
00:39:32.768 Removing: /var/run/dpdk/spdk_pid792860
00:39:32.768 Removing: /var/run/dpdk/spdk_pid793010
00:39:32.768 Removing: /var/run/dpdk/spdk_pid799572
00:39:32.768 Removing: /var/run/dpdk/spdk_pid804789
00:39:32.768 Removing: /var/run/dpdk/spdk_pid816196
00:39:32.768 Removing: /var/run/dpdk/spdk_pid816948
00:39:32.768 Removing: /var/run/dpdk/spdk_pid822195
00:39:32.768 Removing: /var/run/dpdk/spdk_pid822846
00:39:32.768 Removing: /var/run/dpdk/spdk_pid828083
00:39:32.768 Removing: /var/run/dpdk/spdk_pid834851
00:39:32.768 Removing: /var/run/dpdk/spdk_pid837923
00:39:32.768 Removing: /var/run/dpdk/spdk_pid850092
00:39:32.768 Removing: /var/run/dpdk/spdk_pid860779
00:39:32.768 Removing: /var/run/dpdk/spdk_pid862774
00:39:32.768 Removing: /var/run/dpdk/spdk_pid863789
00:39:32.768 Removing: /var/run/dpdk/spdk_pid883955
00:39:32.768 Removing: /var/run/dpdk/spdk_pid888678
00:39:32.768 Removing: /var/run/dpdk/spdk_pid891854
00:39:32.768 Removing: /var/run/dpdk/spdk_pid899617
00:39:32.768 Removing: /var/run/dpdk/spdk_pid899622
00:39:32.768 Removing: /var/run/dpdk/spdk_pid905497
00:39:32.768 Removing: /var/run/dpdk/spdk_pid907864
00:39:32.768 Removing: /var/run/dpdk/spdk_pid910218
00:39:32.768 Removing: /var/run/dpdk/spdk_pid911438
00:39:32.768 Removing: /var/run/dpdk/spdk_pid913933
00:39:32.768 Removing: /var/run/dpdk/spdk_pid915359
00:39:32.768 Removing: /var/run/dpdk/spdk_pid925506
00:39:32.768 Removing: /var/run/dpdk/spdk_pid926628
00:39:32.768 Removing: /var/run/dpdk/spdk_pid927194
00:39:32.768 Removing: /var/run/dpdk/spdk_pid930040
00:39:32.768 Removing: /var/run/dpdk/spdk_pid930604
00:39:32.768 Removing: /var/run/dpdk/spdk_pid931272
00:39:32.768 Removing: /var/run/dpdk/spdk_pid936122
00:39:32.768 Removing: /var/run/dpdk/spdk_pid936162
00:39:32.768 Removing: /var/run/dpdk/spdk_pid937969
00:39:32.768 Removing: /var/run/dpdk/spdk_pid938429
00:39:32.768 Removing: /var/run/dpdk/spdk_pid938738
00:39:32.768 Clean
00:39:32.768 15:49:21 -- common/autotest_common.sh@1453 -- # return 0
00:39:32.769 15:49:21 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:39:32.769 15:49:21 -- common/autotest_common.sh@732 -- # xtrace_disable
00:39:32.769 15:49:21 -- common/autotest_common.sh@10 -- # set +x
00:39:32.769 15:49:21 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:39:32.769 15:49:21 -- common/autotest_common.sh@732 -- # xtrace_disable
00:39:32.769 15:49:21 -- common/autotest_common.sh@10 -- # set +x
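The post-cleanup stage that follows stitches the coverage report together: a fresh lcov capture of the test counters, a merge with the pre-test baseline, and a series of -r filters that prune sources which should not count against SPDK itself (bundled DPDK, system headers, helper apps). A condensed sketch of that flow, with the workspace paths shortened into variables and the long --rc display switches from the log omitted for brevity:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    OUT=$SPDK/../output
    # Capture this run's counters (-c), tag them with the node name (-t),
    # and skip files outside the SPDK tree (--no-external).
    lcov -q -c --no-external -d "$SPDK" -t "$(hostname)" -o "$OUT/cov_test.info"
    # Merge the pre-test baseline with the test capture.
    lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    # Prune trees that should not count toward SPDK coverage.
    lcov -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
    lcov -q -r "$OUT/cov_total.info" '/usr/*' -o "$OUT/cov_total.info"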
00:39:33.029 15:49:21 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:39:33.029 15:49:21 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:39:33.029 15:49:21 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:39:33.029 15:49:21 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:39:33.029 15:49:21 -- spdk/autotest.sh@398 -- # hostname
00:39:33.029 15:49:21 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:39:59.613 geninfo: WARNING: invalid characters removed from testname!
00:40:01.525 15:49:47 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:03.433 15:49:50 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:04.814 15:49:51 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:06.723 15:49:53 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:08.109 15:49:55 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:10.020 15:49:56 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:10.020 15:49:58 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
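timing_finish, traced below, renders the accumulated timing.txt into an SVG chart with Brendan Gregg's FlameGraph script, and only runs when /usr/local/FlameGraph/flamegraph.pl is executable on the node. A sketch of the same step run standalone; the redirect to timing.svg is an assumption here, since flamegraph.pl writes the SVG to stdout:

    # timing.txt holds folded step names with per-step seconds, the input format
    # flamegraph.pl expects; --nametype/--countname only relabel the hover text.
    /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' \
        --nametype Step: --countname seconds \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt > timing.svg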
00:40:10.020 15:49:58 -- spdk/autorun.sh@1 -- $ timing_finish
00:40:10.020 15:49:58 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:40:10.020 15:49:58 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:40:10.020 15:49:58 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:40:10.020 15:49:58 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:40:10.020 + [[ -n 273768 ]]
00:40:10.020 + sudo kill 273768
00:40:10.031 [Pipeline] }
00:40:10.048 [Pipeline] // stage
00:40:10.054 [Pipeline] }
00:40:10.070 [Pipeline] // timeout
00:40:10.076 [Pipeline] }
00:40:10.090 [Pipeline] // catchError
00:40:10.096 [Pipeline] }
00:40:10.112 [Pipeline] // wrap
00:40:10.118 [Pipeline] }
00:40:10.133 [Pipeline] // catchError
00:40:10.142 [Pipeline] stage
00:40:10.145 [Pipeline] { (Epilogue)
00:40:10.160 [Pipeline] catchError
00:40:10.162 [Pipeline] {
00:40:10.177 [Pipeline] echo
00:40:10.179 Cleanup processes
00:40:10.186 [Pipeline] sh
00:40:10.478 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:40:10.478 951746 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:40:10.492 [Pipeline] sh
00:40:10.776 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:40:10.776 ++ grep -v 'sudo pgrep'
00:40:10.776 ++ awk '{print $1}'
00:40:10.776 + sudo kill -9
00:40:10.776 + true
00:40:10.790 [Pipeline] sh
00:40:11.078 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:40:23.321 [Pipeline] sh
00:40:23.610 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:40:23.610 Artifacts sizes are good
00:40:23.626 [Pipeline] archiveArtifacts
00:40:23.634 Archiving artifacts
00:40:23.807 [Pipeline] sh
00:40:24.192 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:40:24.209 [Pipeline] cleanWs
00:40:24.221 [WS-CLEANUP] Deleting project workspace...
00:40:24.221 [WS-CLEANUP] Deferred wipeout is used...
00:40:24.229 [WS-CLEANUP] done
00:40:24.231 [Pipeline] }
00:40:24.250 [Pipeline] // catchError
00:40:24.264 [Pipeline] sh
00:40:24.555 + logger -p user.info -t JENKINS-CI
00:40:24.566 [Pipeline] }
00:40:24.581 [Pipeline] // stage
00:40:24.587 [Pipeline] }
00:40:24.604 [Pipeline] // node
00:40:24.610 [Pipeline] End of Pipeline
00:40:24.645 Finished: SUCCESS